From patchwork Fri Nov 21 15:05:56 2014
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Pranith Kumar
X-Patchwork-Id: 5355551
From: Pranith Kumar
To: Vinod Koul, Dan Williams, Thomas Gleixner, "David S. Miller",
	Manuel Schölling, Josh Triplett, Rashika,
	dmaengine@vger.kernel.org (open list:DMA GENERIC OFFLO...),
	linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v2 2/9] drivers: dma: Replace smp_read_barrier_depends() with lockless_dereference()
Date: Fri, 21 Nov 2014 10:05:56 -0500
Message-Id: <1416582363-20661-3-git-send-email-bobby.prani@gmail.com>
X-Mailer: git-send-email 1.9.1
In-Reply-To: <1416582363-20661-1-git-send-email-bobby.prani@gmail.com>
References: <1416582363-20661-1-git-send-email-bobby.prani@gmail.com>
X-Mailing-List: dmaengine@vger.kernel.org

Recently, lockless_dereference() was added; it can be used in place of
hard-coding smp_read_barrier_depends(). This patch makes that change.
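
For reference, a rough sketch of what the helper does and how the
conversion reads. This assumes the lockless_dereference() definition
that went into include/linux/compiler.h around this time; the temporary
variable name below is illustrative, not the exact one in the tree:

	/* Read a pointer once and order later dependent loads after it. */
	#define lockless_dereference(p) \
	({ \
		typeof(p) __p = ACCESS_ONCE(p); \
		smp_read_barrier_depends(); /* dependency ordering vs. the load of p */ \
		(__p); \
	})

	/* Roughly the open-coded pattern in the ioat cleanup loops today: */
	smp_read_barrier_depends();
	desc = ioat2_get_ring_ent(ioat, idx + i);
	tx = &desc->txd;

	/* With the helper, the barrier is folded into the dereference: */
	desc = ioat2_get_ring_ent(ioat, idx + i);
	tx = &lockless_dereference(desc)->txd;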
Signed-off-by: Pranith Kumar <bobby.prani@gmail.com>
---
 drivers/dma/ioat/dma_v2.c | 3 +--
 drivers/dma/ioat/dma_v3.c | 3 +--
 2 files changed, 2 insertions(+), 4 deletions(-)

diff --git a/drivers/dma/ioat/dma_v2.c b/drivers/dma/ioat/dma_v2.c
index 695483e..0f94d72 100644
--- a/drivers/dma/ioat/dma_v2.c
+++ b/drivers/dma/ioat/dma_v2.c
@@ -142,10 +142,9 @@ static void __cleanup(struct ioat2_dma_chan *ioat, dma_addr_t phys_complete)
 
 	active = ioat2_ring_active(ioat);
 	for (i = 0; i < active && !seen_current; i++) {
-		smp_read_barrier_depends();
 		prefetch(ioat2_get_ring_ent(ioat, idx + i + 1));
 		desc = ioat2_get_ring_ent(ioat, idx + i);
-		tx = &desc->txd;
+		tx = &lockless_dereference(desc)->txd;
 		dump_desc_dbg(ioat, desc);
 		if (tx->cookie) {
 			dma_descriptor_unmap(tx);
diff --git a/drivers/dma/ioat/dma_v3.c b/drivers/dma/ioat/dma_v3.c
index 895f869..cbd0537 100644
--- a/drivers/dma/ioat/dma_v3.c
+++ b/drivers/dma/ioat/dma_v3.c
@@ -389,7 +389,6 @@ static void __cleanup(struct ioat2_dma_chan *ioat, dma_addr_t phys_complete)
 	for (i = 0; i < active && !seen_current; i++) {
 		struct dma_async_tx_descriptor *tx;
 
-		smp_read_barrier_depends();
 		prefetch(ioat2_get_ring_ent(ioat, idx + i + 1));
 		desc = ioat2_get_ring_ent(ioat, idx + i);
 		dump_desc_dbg(ioat, desc);
@@ -398,7 +397,7 @@ static void __cleanup(struct ioat2_dma_chan *ioat, dma_addr_t phys_complete)
 		if (device->cap & IOAT_CAP_DWBES)
 			desc_get_errstat(ioat, desc);
 
-		tx = &desc->txd;
+		tx = &lockless_dereference(desc)->txd;
 		if (tx->cookie) {
 			dma_cookie_complete(tx);
 			dma_descriptor_unmap(tx);