From patchwork Mon Feb 23 04:50:27 2015
X-Patchwork-Submitter: Yuval Shaia
X-Patchwork-Id: 5865391
From: Yuval Shaia <yuval.shaia@oracle.com>
To: yuval.shaia@oracle.com, linux-rdma@vger.kernel.org
Subject: [PATCH] IB/verbs: Check each operation of dma_ops individually
Date: Sun, 22 Feb 2015 20:50:27 -0800
Message-Id: <1424667027-8790-1-git-send-email-yuval.shaia@oracle.com>
X-Mailer: git-send-email 1.7.1
X-Mailing-List: linux-rdma@vger.kernel.org

The current approach forces one to implement all DMA ops even when some
of them could simply use the default implementation. As a result, for a
new set of dma_ops (e.g. a new arch) many functions end up as mere
wrappers around the default function. The fix is to check each DMA
operation individually, so one can leave unset the ones that do not
need to be overridden.
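For illustration only, a hypothetical provider (the foo_* names below are
invented for this example and are not part of the patch) could then register
a sparse struct ib_dma_mapping_ops, overriding just the callbacks it actually
needs and leaving the rest NULL so the corresponding ib_dma_* helpers fall
back to the generic dma_* API:

/* Hypothetical provider: only map_single/unmap_single are overridden.
 * Every other callback stays NULL, so ib_dma_mapping_error(),
 * ib_dma_map_page(), etc. fall through to the default dma_* path.
 */
static struct ib_dma_mapping_ops foo_dma_ops = {
	.map_single	= foo_map_single,
	.unmap_single	= foo_unmap_single,
};

	...
	ib_dev->dma_ops = &foo_dma_ops;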
Signed-off-by: Yuval Shaia <yuval.shaia@oracle.com>
---
 include/rdma/ib_verbs.h |   22 +++++++++++-----------
 1 files changed, 11 insertions(+), 11 deletions(-)

diff --git a/include/rdma/ib_verbs.h b/include/rdma/ib_verbs.h
index 0d74f1d..166c01a 100644
--- a/include/rdma/ib_verbs.h
+++ b/include/rdma/ib_verbs.h
@@ -2145,7 +2145,7 @@ struct ib_mr *ib_get_dma_mr(struct ib_pd *pd, int mr_access_flags);
  */
 static inline int ib_dma_mapping_error(struct ib_device *dev, u64 dma_addr)
 {
-	if (dev->dma_ops)
+	if (dev->dma_ops && dev->dma_ops->mapping_error)
 		return dev->dma_ops->mapping_error(dev, dma_addr);
 	return dma_mapping_error(dev->dma_device, dma_addr);
 }
@@ -2161,7 +2161,7 @@ static inline u64 ib_dma_map_single(struct ib_device *dev,
 				    void *cpu_addr, size_t size,
 				    enum dma_data_direction direction)
 {
-	if (dev->dma_ops)
+	if (dev->dma_ops && dev->dma_ops->map_single)
 		return dev->dma_ops->map_single(dev, cpu_addr, size, direction);
 	return dma_map_single(dev->dma_device, cpu_addr, size, direction);
 }
@@ -2177,7 +2177,7 @@ static inline void ib_dma_unmap_single(struct ib_device *dev,
 				       u64 addr, size_t size,
 				       enum dma_data_direction direction)
 {
-	if (dev->dma_ops)
+	if (dev->dma_ops && dev->dma_ops->unmap_single)
 		dev->dma_ops->unmap_single(dev, addr, size, direction);
 	else
 		dma_unmap_single(dev->dma_device, addr, size, direction);
@@ -2215,7 +2215,7 @@ static inline u64 ib_dma_map_page(struct ib_device *dev,
 				  size_t size,
 				  enum dma_data_direction direction)
 {
-	if (dev->dma_ops)
+	if (dev->dma_ops && dev->dma_ops->map_page)
 		return dev->dma_ops->map_page(dev, page, offset, size, direction);
 	return dma_map_page(dev->dma_device, page, offset, size, direction);
 }
@@ -2231,7 +2231,7 @@ static inline void ib_dma_unmap_page(struct ib_device *dev,
 				     u64 addr, size_t size,
 				     enum dma_data_direction direction)
 {
-	if (dev->dma_ops)
+	if (dev->dma_ops && dev->dma_ops->unmap_page)
 		dev->dma_ops->unmap_page(dev, addr, size, direction);
 	else
 		dma_unmap_page(dev->dma_device, addr, size, direction);
@@ -2248,7 +2248,7 @@ static inline int ib_dma_map_sg(struct ib_device *dev,
 				struct scatterlist *sg, int nents,
 				enum dma_data_direction direction)
 {
-	if (dev->dma_ops)
+	if (dev->dma_ops && dev->dma_ops->map_sg)
 		return dev->dma_ops->map_sg(dev, sg, nents, direction);
 	return dma_map_sg(dev->dma_device, sg, nents, direction);
 }
@@ -2264,7 +2264,7 @@ static inline void ib_dma_unmap_sg(struct ib_device *dev,
 				   struct scatterlist *sg, int nents,
 				   enum dma_data_direction direction)
 {
-	if (dev->dma_ops)
+	if (dev->dma_ops && dev->dma_ops->unmap_sg)
 		dev->dma_ops->unmap_sg(dev, sg, nents, direction);
 	else
 		dma_unmap_sg(dev->dma_device, sg, nents, direction);
@@ -2325,7 +2325,7 @@ static inline void ib_dma_sync_single_for_cpu(struct ib_device *dev,
 					      size_t size,
 					      enum dma_data_direction dir)
 {
-	if (dev->dma_ops)
+	if (dev->dma_ops && dev->dma_ops->sync_single_for_cpu)
 		dev->dma_ops->sync_single_for_cpu(dev, addr, size, dir);
 	else
 		dma_sync_single_for_cpu(dev->dma_device, addr, size, dir);
@@ -2343,7 +2343,7 @@ static inline void ib_dma_sync_single_for_device(struct ib_device *dev,
 						 size_t size,
 						 enum dma_data_direction dir)
 {
-	if (dev->dma_ops)
+	if (dev->dma_ops && dev->dma_ops->sync_single_for_device)
 		dev->dma_ops->sync_single_for_device(dev, addr, size, dir);
 	else
 		dma_sync_single_for_device(dev->dma_device, addr, size, dir);
@@ -2361,7 +2361,7 @@ static inline void *ib_dma_alloc_coherent(struct ib_device *dev,
 					   u64 *dma_handle,
 					   gfp_t flag)
 {
-	if (dev->dma_ops)
+	if (dev->dma_ops && dev->dma_ops->alloc_coherent)
 		return dev->dma_ops->alloc_coherent(dev, size, dma_handle, flag);
 	else {
 		dma_addr_t handle;
@@ -2384,7 +2384,7 @@ static inline void ib_dma_free_coherent(struct ib_device *dev,
 					size_t size, void *cpu_addr,
 					u64 dma_handle)
 {
-	if (dev->dma_ops)
+	if (dev->dma_ops && dev->dma_ops->free_coherent)
 		dev->dma_ops->free_coherent(dev, size, cpu_addr, dma_handle);
 	else
 		dma_free_coherent(dev->dma_device, size, cpu_addr, dma_handle);
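
As a rough, self-contained sketch of the same per-operation fallback pattern
outside the kernel (plain userspace C; every name here is invented for the
demo and none of it is kernel code), each wrapper tests its own callback
rather than only the presence of the ops table:

#include <stdio.h>

/* Toy stand-in for the per-device override table: any callback may be
 * NULL, in which case the wrapper uses the default implementation.
 */
struct toy_dma_ops {
	void (*map_single)(void);
	void (*map_page)(void);
};

static void default_map_single(void) { printf("default map_single\n"); }
static void default_map_page(void)   { printf("default map_page\n"); }
static void custom_map_single(void)  { printf("custom map_single\n"); }

/* Each wrapper checks its own callback, not just the table pointer. */
static void toy_ib_dma_map_single(const struct toy_dma_ops *ops)
{
	if (ops && ops->map_single)
		ops->map_single();
	else
		default_map_single();
}

static void toy_ib_dma_map_page(const struct toy_dma_ops *ops)
{
	if (ops && ops->map_page)
		ops->map_page();
	else
		default_map_page();
}

int main(void)
{
	/* Only map_single is overridden; map_page falls back automatically. */
	struct toy_dma_ops ops = { .map_single = custom_map_single };

	toy_ib_dma_map_single(&ops);	/* prints "custom map_single"  */
	toy_ib_dma_map_page(&ops);	/* prints "default map_page"   */
	return 0;
}

The cost is one extra pointer test per call, but it removes the need to fill
every slot of a new provider's ops table with pass-through wrappers.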