From patchwork Tue Jan 29 12:27:18 2013
X-Patchwork-Submitter: Hiroshi DOYU
X-Patchwork-Id: 2073781
From: Hiroshi Doyu
To: 
Subject: [v2 1/1] ARM: dma-mapping: Call arm_dma_set_mask() if no ->set_dma_mask()
Date: Tue, 29 Jan 2013 14:27:18 +0200
Message-ID: <1359462438-21006-1-git-send-email-hdoyu@nvidia.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <5107B301.2060802@samsung.com>
References: <5107B301.2060802@samsung.com>
MIME-Version: 1.0
Cc: linaro-mm-sig@lists.linaro.org, linux-tegra@vger.kernel.org,
 linux-arm-kernel@lists.infradead.org, Hiroshi Doyu

struct dma_map_ops iommu_ops does not provide a ->set_dma_mask() callback,
which causes a crash when dma_set_mask() is called from a driver.

Signed-off-by: Hiroshi Doyu
---
 arch/arm/include/asm/dma-mapping.h | 11 ++++++++---
 1 file changed, 8 insertions(+), 3 deletions(-)

diff --git a/arch/arm/include/asm/dma-mapping.h b/arch/arm/include/asm/dma-mapping.h
index 5b579b9..63cc49c 100644
--- a/arch/arm/include/asm/dma-mapping.h
+++ b/arch/arm/include/asm/dma-mapping.h
@@ -14,6 +14,7 @@
 #define DMA_ERROR_CODE	(~0)
 extern struct dma_map_ops arm_dma_ops;
 extern struct dma_map_ops arm_coherent_dma_ops;
+extern int arm_dma_set_mask(struct device *dev, u64 dma_mask);
 
 static inline struct dma_map_ops *get_dma_ops(struct device *dev)
 {
@@ -32,7 +33,13 @@ static inline void set_dma_ops(struct device *dev, struct dma_map_ops *ops)
 
 static inline int dma_set_mask(struct device *dev, u64 mask)
 {
-	return get_dma_ops(dev)->set_dma_mask(dev, mask);
+	struct dma_map_ops *ops = get_dma_ops(dev);
+	BUG_ON(!ops);
+
+	if (ops->set_dma_mask)
+		return ops->set_dma_mask(dev, mask);
+
+	return arm_dma_set_mask(dev, mask);
 }
 
 #ifdef __arch_page_to_dma
@@ -112,8 +119,6 @@ static inline void dma_free_noncoherent(struct device *dev, size_t size,
 
 extern int dma_supported(struct device *dev, u64 mask);
 
-extern int arm_dma_set_mask(struct device *dev, u64 dma_mask);
-
 /**
  * arm_dma_alloc - allocate consistent memory for DMA
  * @dev: valid struct device pointer, or NULL for ISA and EISA-like devices
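
For readers who want to see the fallback pattern in isolation, here is a
minimal, self-contained C sketch of the same idea: an ops table whose
->set_dma_mask hook may be NULL, and a dispatcher that falls back to a
default implementation instead of jumping through the NULL pointer. The
names used here (fake_device, fake_dma_ops, iommu_like_ops,
default_set_mask, model_dma_set_mask) are invented for this sketch only
and are not kernel API.

#include <assert.h>
#include <stdint.h>
#include <stdio.h>

struct fake_device;

struct fake_dma_ops {
	int (*set_dma_mask)(struct fake_device *dev, uint64_t mask);
};

struct fake_device {
	const struct fake_dma_ops *ops;
	uint64_t dma_mask;
};

/* Stand-in for arm_dma_set_mask(): accept and record the mask. */
static int default_set_mask(struct fake_device *dev, uint64_t mask)
{
	dev->dma_mask = mask;
	return 0;
}

/* Like the iommu_ops described above: no ->set_dma_mask hook provided. */
static const struct fake_dma_ops iommu_like_ops = {
	.set_dma_mask = NULL,
};

/* Mirrors the patched dma_set_mask(): use the hook if present, else fall back. */
static int model_dma_set_mask(struct fake_device *dev, uint64_t mask)
{
	const struct fake_dma_ops *ops = dev->ops;

	assert(ops);			/* the patch uses BUG_ON(!ops) here */
	if (ops->set_dma_mask)
		return ops->set_dma_mask(dev, mask);

	return default_set_mask(dev, mask);
}

int main(void)
{
	struct fake_device dev = { .ops = &iommu_like_ops, .dma_mask = 0 };

	/* Before the fix, this call path dereferenced a NULL function pointer. */
	if (model_dma_set_mask(&dev, (1ULL << 32) - 1) == 0)
		printf("mask set to 0x%llx\n", (unsigned long long)dev.dma_mask);

	return 0;
}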