From patchwork Wed Aug 12 07:05:43 2015
From: Christoph Hellwig
To: torvalds@linux-foundation.org, axboe@kernel.dk
Cc: linux-mips@linux-mips.org, linux-ia64@vger.kernel.org,
    linux-nvdimm@ml01.01.org, dhowells@redhat.com, sparclinux@vger.kernel.org,
    egtvedt@samfundet.no, linux-arch@vger.kernel.org, linux-s390@vger.kernel.org,
    x86@kernel.org, dwmw2@infradead.org, hskinnemoen@gmail.com,
    linux-xtensa@linux-xtensa.org, grundler@parisc-linux.org, realmz6@gmail.com,
    alex.williamson@redhat.com, linux-metag@vger.kernel.org, monstr@monstr.eu,
    linux-parisc@vger.kernel.org, vgupta@synopsys.com, linux-kernel@vger.kernel.org,
    linux-alpha@vger.kernel.org, linux-media@vger.kernel.org,
    linuxppc-dev@lists.ozlabs.org
Subject: [PATCH 24/31] xtensa: handle page-less SG entries
Date: Wed, 12 Aug 2015 09:05:43 +0200
Message-Id: <1439363150-8661-25-git-send-email-hch@lst.de>
In-Reply-To: <1439363150-8661-1-git-send-email-hch@lst.de>
References: <1439363150-8661-1-git-send-email-hch@lst.de>

Make all cache invalidation conditional on sg_has_page().
Signed-off-by: Christoph Hellwig
---
 arch/xtensa/include/asm/dma-mapping.h | 17 ++++++++++-------
 1 file changed, 10 insertions(+), 7 deletions(-)

diff --git a/arch/xtensa/include/asm/dma-mapping.h b/arch/xtensa/include/asm/dma-mapping.h
index 1f5f6dc..262a1d1 100644
--- a/arch/xtensa/include/asm/dma-mapping.h
+++ b/arch/xtensa/include/asm/dma-mapping.h
@@ -61,10 +61,9 @@ dma_map_sg(struct device *dev, struct scatterlist *sglist, int nents,
 	BUG_ON(direction == DMA_NONE);
 
 	for_each_sg(sglist, sg, nents, i) {
-		BUG_ON(!sg_page(sg));
-
 		sg->dma_address = sg_phys(sg);
-		consistent_sync(sg_virt(sg), sg->length, direction);
+		if (sg_has_page(sg))
+			consistent_sync(sg_virt(sg), sg->length, direction);
 	}
 
 	return nents;
@@ -131,8 +130,10 @@ dma_sync_sg_for_cpu(struct device *dev, struct scatterlist *sglist, int nelems,
 	int i;
 	struct scatterlist *sg;
 
-	for_each_sg(sglist, sg, nelems, i)
-		consistent_sync(sg_virt(sg), sg->length, dir);
+	for_each_sg(sglist, sg, nelems, i) {
+		if (sg_has_page(sg))
+			consistent_sync(sg_virt(sg), sg->length, dir);
+	}
 }
 
 static inline void
@@ -142,8 +143,10 @@ dma_sync_sg_for_device(struct device *dev, struct scatterlist *sglist,
 	int i;
 	struct scatterlist *sg;
 
-	for_each_sg(sglist, sg, nelems, i)
-		consistent_sync(sg_virt(sg), sg->length, dir);
+	for_each_sg(sglist, sg, nelems, i) {
+		if (sg_has_page(sg))
+			consistent_sync(sg_virt(sg), sg->length, dir);
+	}
 }
 
 static inline int dma_mapping_error(struct device *dev, dma_addr_t dma_addr)
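
[Editorial note for readers joining the series at this patch: sg_has_page() is not
defined here; it is introduced to <linux/scatterlist.h> earlier in the series. The
fragment below is only a sketch of the intended semantics -- true when the entry is
backed by a kernel-mapped struct page (so sg_virt() is valid and the cache must be
synced), false for a page-less, physical-address-only entry. The SG_PAGELESS name
and its encoding in sg->page_link are assumptions made for illustration, not the
series' actual implementation.]

/*
 * Illustrative sketch only, not the helper added by this series.
 * Assumes a hypothetical SG_PAGELESS flag stored in the low bits of
 * sg->page_link, next to the existing chain/last markers.
 */
#include <linux/scatterlist.h>

#define SG_PAGELESS	0x04UL	/* hypothetical flag bit */

static inline bool sg_has_page(struct scatterlist *sg)
{
	return !(sg->page_link & SG_PAGELESS);
}

With such a helper, the xtensa code above still assigns sg->dma_address from
sg_phys(sg) for every entry; only the virtual-address cache maintenance
(consistent_sync() on sg_virt(sg)) is skipped when the entry has no page backing.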