From patchwork Thu Dec 7 05:21:04 2017
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Appana Durga Kedareswara rao
X-Patchwork-Id: 10097653
From: Kedareswara rao Appana
Subject: [PATCH v7 3/6] dmaengine: xilinx_dma: Fix race condition in the driver for multiple descriptor scenario
Date: Thu, 7 Dec 2017 10:51:04 +0530
Message-ID: <1512624067-13554-4-git-send-email-appanad@xilinx.com>
X-Mailer: git-send-email 1.9.1
In-Reply-To: <1512624067-13554-1-git-send-email-appanad@xilinx.com>
References: <1512624067-13554-1-git-send-email-appanad@xilinx.com>
Cc: dmaengine@vger.kernel.org, linux-kernel@vger.kernel.org,
 linux-arm-kernel@lists.infradead.org

As per the AXI DMA engine spec, the software must not move the tail
pointer to a location that has not been updated (the next-descriptor
field of the h/w descriptor should always point to a valid address).
When the user submits multiple descriptors on the receive side, with the
current driver flow the next-descriptor field of the last buffer
descriptor points to an invalid location, resulting in invalid data or
errors from the AXI DMA engine.

This patch fixes the issue by creating a buffer descriptor chain during
channel allocation itself and using those buffer descriptors for the
subsequent DMA operations.

Signed-off-by: Kedareswara rao Appana
---
Changes for v7:
---> None.
Changes for v6:
---> Updated commit message as suggested by Vinod.
Changes for v5:
---> None.
Changes for v4:
---> None.
Changes for v3:
---> None.
Changes for v2:
---> None.
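For readers who want the shape of the fix before reading the diff, below is a
minimal user-space sketch of the descriptor ring that the patch now builds at
channel-allocation time. This is not driver code: struct bd, bd_ring_init()
and the sample base address are invented for illustration, and NUM_DESCS
stands in for XILINX_DMA_NUM_DESCS. It only demonstrates the invariant the
patch relies on: once every hardware descriptor's next-descriptor words are
wired into a circular chain up front, whichever descriptor happens to be the
tail always points at another valid descriptor.

#include <stdio.h>
#include <string.h>

#define NUM_DESCS 8     /* the driver uses XILINX_DMA_NUM_DESCS (255) */

/* Simplified stand-in for the AXI DMA hardware buffer descriptor. */
struct bd {
        unsigned int next_desc;         /* low 32 bits of next BD address */
        unsigned int next_desc_msb;     /* high 32 bits of next BD address */
        unsigned long long buf_addr;
        unsigned int control;
        unsigned int status;
};

/*
 * Wire all descriptors into a circular chain once, mirroring the loop the
 * patch adds to xilinx_dma_alloc_chan_resources(). 'base_phys' models the
 * DMA address the coherent allocation would return.
 */
static void bd_ring_init(struct bd *ring, unsigned long long base_phys)
{
        for (int i = 0; i < NUM_DESCS; i++) {
                unsigned long long next = base_phys +
                        sizeof(struct bd) * ((i + 1) % NUM_DESCS);

                ring[i].next_desc = (unsigned int)next;
                ring[i].next_desc_msb = (unsigned int)(next >> 32);
        }
}

int main(void)
{
        struct bd ring[NUM_DESCS];
        unsigned long long base_phys = 0x100000000ULL;  /* made-up address */

        memset(ring, 0, sizeof(ring));
        bd_ring_init(ring, base_phys);

        /* Every BD, including whichever becomes the tail, has a valid link. */
        for (int i = 0; i < NUM_DESCS; i++)
                printf("bd[%d] -> next 0x%08x%08x\n", i,
                       ring[i].next_desc_msb, ring[i].next_desc);
        return 0;
}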
 drivers/dma/xilinx/xilinx_dma.c | 135 +++++++++++++++++++++++++---------------
 1 file changed, 84 insertions(+), 51 deletions(-)

diff --git a/drivers/dma/xilinx/xilinx_dma.c b/drivers/dma/xilinx/xilinx_dma.c
index 9063ca0..ab01306 100644
--- a/drivers/dma/xilinx/xilinx_dma.c
+++ b/drivers/dma/xilinx/xilinx_dma.c
@@ -165,6 +165,7 @@
 #define XILINX_DMA_BD_SOP               BIT(27)
 #define XILINX_DMA_BD_EOP               BIT(26)
 #define XILINX_DMA_COALESCE_MAX         255
+#define XILINX_DMA_NUM_DESCS            255
 #define XILINX_DMA_NUM_APP_WORDS        5
 
 /* Multi-Channel DMA Descriptor offsets*/
@@ -312,6 +313,7 @@ struct xilinx_dma_tx_descriptor {
  * @pending_list: Descriptors waiting
  * @active_list: Descriptors ready to submit
  * @done_list: Complete descriptors
+ * @free_seg_list: Free descriptors
  * @common: DMA common channel
  * @desc_pool: Descriptors pool
  * @dev: The dma device
@@ -332,7 +334,9 @@ struct xilinx_dma_tx_descriptor {
  * @desc_submitcount: Descriptor h/w submitted count
  * @residue: Residue for AXI DMA
  * @seg_v: Statically allocated segments base
+ * @seg_p: Physical allocated segments base
  * @cyclic_seg_v: Statically allocated segment base for cyclic transfers
+ * @cyclic_seg_p: Physical allocated segments base for cyclic dma
  * @start_transfer: Differentiate b/w DMA IP's transfer
  * @stop_transfer: Differentiate b/w DMA IP's quiesce
  */
@@ -344,6 +348,7 @@ struct xilinx_dma_chan {
        struct list_head pending_list;
        struct list_head active_list;
        struct list_head done_list;
+       struct list_head free_seg_list;
        struct dma_chan common;
        struct dma_pool *desc_pool;
        struct device *dev;
@@ -364,7 +369,9 @@ struct xilinx_dma_chan {
        u32 desc_submitcount;
        u32 residue;
        struct xilinx_axidma_tx_segment *seg_v;
+       dma_addr_t seg_p;
        struct xilinx_axidma_tx_segment *cyclic_seg_v;
+       dma_addr_t cyclic_seg_p;
        void (*start_transfer)(struct xilinx_dma_chan *chan);
        int (*stop_transfer)(struct xilinx_dma_chan *chan);
        u16 tdest;
@@ -584,18 +591,32 @@ xilinx_cdma_alloc_tx_segment(struct xilinx_dma_chan *chan)
 static struct xilinx_axidma_tx_segment *
 xilinx_axidma_alloc_tx_segment(struct xilinx_dma_chan *chan)
 {
-       struct xilinx_axidma_tx_segment *segment;
-       dma_addr_t phys;
-
-       segment = dma_pool_zalloc(chan->desc_pool, GFP_ATOMIC, &phys);
-       if (!segment)
-               return NULL;
+       struct xilinx_axidma_tx_segment *segment = NULL;
+       unsigned long flags;
 
-       segment->phys = phys;
+       spin_lock_irqsave(&chan->lock, flags);
+       if (!list_empty(&chan->free_seg_list)) {
+               segment = list_first_entry(&chan->free_seg_list,
+                                          struct xilinx_axidma_tx_segment,
+                                          node);
+               list_del(&segment->node);
+       }
+       spin_unlock_irqrestore(&chan->lock, flags);
 
        return segment;
 }
 
+static void xilinx_dma_clean_hw_desc(struct xilinx_axidma_desc_hw *hw)
+{
+       u32 next_desc = hw->next_desc;
+       u32 next_desc_msb = hw->next_desc_msb;
+
+       memset(hw, 0, sizeof(struct xilinx_axidma_desc_hw));
+
+       hw->next_desc = next_desc;
+       hw->next_desc_msb = next_desc_msb;
+}
+
 /**
  * xilinx_dma_free_tx_segment - Free transaction segment
  * @chan: Driver specific DMA channel
@@ -604,7 +625,9 @@ xilinx_axidma_alloc_tx_segment(struct xilinx_dma_chan *chan)
 static void xilinx_dma_free_tx_segment(struct xilinx_dma_chan *chan,
                                struct xilinx_axidma_tx_segment *segment)
 {
-       dma_pool_free(chan->desc_pool, segment, segment->phys);
+       xilinx_dma_clean_hw_desc(&segment->hw);
+
+       list_add_tail(&segment->node, &chan->free_seg_list);
 }
 
 /**
@@ -729,16 +752,26 @@ static void xilinx_dma_free_descriptors(struct xilinx_dma_chan *chan)
 static void xilinx_dma_free_chan_resources(struct dma_chan *dchan)
 {
        struct xilinx_dma_chan *chan = to_xilinx_chan(dchan);
+       unsigned long flags;
 
        dev_dbg(chan->dev, "Free all channel resources.\n");
 
        xilinx_dma_free_descriptors(chan);
+
        if (chan->xdev->dma_config->dmatype == XDMA_TYPE_AXIDMA) {
-               xilinx_dma_free_tx_segment(chan, chan->cyclic_seg_v);
-               xilinx_dma_free_tx_segment(chan, chan->seg_v);
+               spin_lock_irqsave(&chan->lock, flags);
+               INIT_LIST_HEAD(&chan->free_seg_list);
+               spin_unlock_irqrestore(&chan->lock, flags);
+
+               /* Free Memory that is allocated for cyclic DMA Mode */
+               dma_free_coherent(chan->dev, sizeof(*chan->cyclic_seg_v),
+                                 chan->cyclic_seg_v, chan->cyclic_seg_p);
+       }
+
+       if (chan->xdev->dma_config->dmatype != XDMA_TYPE_AXIDMA) {
+               dma_pool_destroy(chan->desc_pool);
+               chan->desc_pool = NULL;
        }
-       dma_pool_destroy(chan->desc_pool);
-       chan->desc_pool = NULL;
 }
 
 /**
@@ -821,6 +854,7 @@ static void xilinx_dma_do_tasklet(unsigned long data)
 static int xilinx_dma_alloc_chan_resources(struct dma_chan *dchan)
 {
        struct xilinx_dma_chan *chan = to_xilinx_chan(dchan);
+       int i;
 
        /* Has this channel already been allocated? */
        if (chan->desc_pool)
@@ -831,11 +865,30 @@ static int xilinx_dma_alloc_chan_resources(struct dma_chan *dchan)
         * for meeting Xilinx VDMA specification requirement.
         */
        if (chan->xdev->dma_config->dmatype == XDMA_TYPE_AXIDMA) {
-               chan->desc_pool = dma_pool_create("xilinx_dma_desc_pool",
-                                  chan->dev,
-                                  sizeof(struct xilinx_axidma_tx_segment),
-                                  __alignof__(struct xilinx_axidma_tx_segment),
-                                  0);
+               /* Allocate the buffer descriptors. */
+               chan->seg_v = dma_zalloc_coherent(chan->dev,
+                                                 sizeof(*chan->seg_v) *
+                                                 XILINX_DMA_NUM_DESCS,
+                                                 &chan->seg_p, GFP_KERNEL);
+               if (!chan->seg_v) {
+                       dev_err(chan->dev,
+                               "unable to allocate channel %d descriptors\n",
+                               chan->id);
+                       return -ENOMEM;
+               }
+
+               for (i = 0; i < XILINX_DMA_NUM_DESCS; i++) {
+                       chan->seg_v[i].hw.next_desc =
+                       lower_32_bits(chan->seg_p + sizeof(*chan->seg_v) *
+                               ((i + 1) % XILINX_DMA_NUM_DESCS));
+                       chan->seg_v[i].hw.next_desc_msb =
+                       upper_32_bits(chan->seg_p + sizeof(*chan->seg_v) *
+                               ((i + 1) % XILINX_DMA_NUM_DESCS));
+                       chan->seg_v[i].phys = chan->seg_p +
+                               sizeof(*chan->seg_v) * i;
+                       list_add_tail(&chan->seg_v[i].node,
+                                     &chan->free_seg_list);
+               }
        } else if (chan->xdev->dma_config->dmatype == XDMA_TYPE_CDMA) {
                chan->desc_pool = dma_pool_create("xilinx_cdma_desc_pool",
                                   chan->dev,
@@ -850,7 +903,8 @@ static int xilinx_dma_alloc_chan_resources(struct dma_chan *dchan)
                             0);
        }
 
-       if (!chan->desc_pool) {
+       if (!chan->desc_pool &&
+           (chan->xdev->dma_config->dmatype != XDMA_TYPE_AXIDMA)) {
                dev_err(chan->dev,
                        "unable to allocate channel %d descriptor pool\n",
                        chan->id);
@@ -859,22 +913,20 @@ static int xilinx_dma_alloc_chan_resources(struct dma_chan *dchan)
 
        if (chan->xdev->dma_config->dmatype == XDMA_TYPE_AXIDMA) {
                /*
-                * For AXI DMA case after submitting a pending_list, keep
-                * an extra segment allocated so that the "next descriptor"
-                * pointer on the tail descriptor always points to a
-                * valid descriptor, even when paused after reaching taildesc.
-                * This way, it is possible to issue additional
-                * transfers without halting and restarting the channel.
-                */
-               chan->seg_v = xilinx_axidma_alloc_tx_segment(chan);
-
-               /*
                 * For cyclic DMA mode we need to program the tail Descriptor
                 * register with a value which is not a part of the BD chain
                 * so allocating a desc segment during channel allocation for
                 * programming tail descriptor.
                 */
-               chan->cyclic_seg_v = xilinx_axidma_alloc_tx_segment(chan);
+               chan->cyclic_seg_v = dma_zalloc_coherent(chan->dev,
+                                       sizeof(*chan->cyclic_seg_v),
+                                       &chan->cyclic_seg_p, GFP_KERNEL);
+               if (!chan->cyclic_seg_v) {
+                       dev_err(chan->dev,
+                               "unable to allocate desc segment for cyclic DMA\n");
+                       return -ENOMEM;
+               }
+               chan->cyclic_seg_v->phys = chan->cyclic_seg_p;
        }
 
        dma_cookie_init(dchan);
@@ -1184,7 +1236,7 @@ static void xilinx_cdma_start_transfer(struct xilinx_dma_chan *chan)
 static void xilinx_dma_start_transfer(struct xilinx_dma_chan *chan)
 {
        struct xilinx_dma_tx_descriptor *head_desc, *tail_desc;
-       struct xilinx_axidma_tx_segment *tail_segment, *old_head, *new_head;
+       struct xilinx_axidma_tx_segment *tail_segment;
        u32 reg;
 
        if (chan->err)
@@ -1203,21 +1255,6 @@ static void xilinx_dma_start_transfer(struct xilinx_dma_chan *chan)
        tail_segment = list_last_entry(&tail_desc->segments,
                                       struct xilinx_axidma_tx_segment, node);
 
-       if (chan->has_sg && !chan->xdev->mcdma) {
-               old_head = list_first_entry(&head_desc->segments,
-                                       struct xilinx_axidma_tx_segment, node);
-               new_head = chan->seg_v;
-               /* Copy Buffer Descriptor fields. */
-               new_head->hw = old_head->hw;
-
-               /* Swap and save new reserve */
-               list_replace_init(&old_head->node, &new_head->node);
-               chan->seg_v = old_head;
-
-               tail_segment->hw.next_desc = chan->seg_v->phys;
-               head_desc->async_tx.phys = new_head->phys;
-       }
-
        reg = dma_ctrl_read(chan, XILINX_DMA_REG_DMACR);
 
        if (chan->desc_pendingcount <= XILINX_DMA_COALESCE_MAX) {
@@ -1705,7 +1742,7 @@ static struct dma_async_tx_descriptor *xilinx_dma_prep_slave_sg(
 {
        struct xilinx_dma_chan *chan = to_xilinx_chan(dchan);
        struct xilinx_dma_tx_descriptor *desc;
-       struct xilinx_axidma_tx_segment *segment = NULL, *prev = NULL;
+       struct xilinx_axidma_tx_segment *segment = NULL;
        u32 *app_w = (u32 *)context;
        struct scatterlist *sg;
        size_t copy;
@@ -1756,10 +1793,6 @@ static struct dma_async_tx_descriptor *xilinx_dma_prep_slave_sg(
                                           XILINX_DMA_NUM_APP_WORDS);
                }
 
-               if (prev)
-                       prev->hw.next_desc = segment->phys;
-
-               prev = segment;
                sg_used += copy;
 
                /*
@@ -1773,7 +1806,6 @@ static struct dma_async_tx_descriptor *xilinx_dma_prep_slave_sg(
        segment = list_first_entry(&desc->segments,
                                   struct xilinx_axidma_tx_segment, node);
        desc->async_tx.phys = segment->phys;
-       prev->hw.next_desc = segment->phys;
 
        /* For the last DMA_MEM_TO_DEV transfer, set EOP */
        if (chan->direction == DMA_MEM_TO_DEV) {
@@ -2328,6 +2360,7 @@ static int xilinx_dma_chan_probe(struct xilinx_dma_device *xdev,
        INIT_LIST_HEAD(&chan->pending_list);
        INIT_LIST_HEAD(&chan->done_list);
        INIT_LIST_HEAD(&chan->active_list);
+       INIT_LIST_HEAD(&chan->free_seg_list);
 
        /* Retrieve the channel properties from the device tree */
        has_dre = of_property_read_bool(node, "xlnx,include-dre");
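As a companion to the hunks above, here is a rough user-space model of how
segments circulate after this change. The names seg_alloc(), seg_free() and
the singly linked free list are invented for illustration; the driver itself
uses the kernel list helpers on chan->free_seg_list under chan->lock. The
point it mirrors is that freeing a segment wipes the payload but preserves
the next-descriptor words (as xilinx_dma_clean_hw_desc() does), so the ring
wiring created at channel allocation survives for the lifetime of the channel.

#include <string.h>

/* Simplified hardware descriptor: only the fields needed for the model. */
struct hw_desc {
        unsigned int next_desc;
        unsigned int next_desc_msb;
        unsigned long long buf_addr;
        unsigned int control;
        unsigned int status;
};

/* Simplified software segment wrapping one hardware descriptor. */
struct segment {
        struct hw_desc hw;
        struct segment *next_free;      /* stand-in for the list_head node */
};

static struct segment *free_list;       /* stand-in for chan->free_seg_list */

/* Allocation just pops a pre-wired segment (the driver holds chan->lock). */
static struct segment *seg_alloc(void)
{
        struct segment *s = free_list;

        if (s)
                free_list = s->next_free;
        return s;
}

/*
 * Freeing wipes the payload but keeps the next-descriptor words, then
 * returns the segment to the free list for reuse.
 */
static void seg_free(struct segment *s)
{
        unsigned int next = s->hw.next_desc;
        unsigned int next_msb = s->hw.next_desc_msb;

        memset(&s->hw, 0, sizeof(s->hw));
        s->hw.next_desc = next;
        s->hw.next_desc_msb = next_msb;

        s->next_free = free_list;
        free_list = s;
}

int main(void)
{
        static struct segment ring[4];  /* pretend these were pre-wired */

        for (int i = 0; i < 4; i++)
                seg_free(&ring[i]);     /* populate the free list */

        struct segment *s = seg_alloc();
        (void)s;                        /* would be filled in and submitted */
        return 0;
}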