From patchwork Fri Jul 1 13:18:56 2011
X-Patchwork-Submitter: Robert Pearson
X-Patchwork-Id: 938902
Message-Id: <20110701132202.390441142@systemfabricworks.com>
References: <20110701131821.928693424@systemfabricworks.com>
User-Agent: quilt/0.46-1
Date: Fri, 01 Jul 2011 08:18:56 -0500
From: rpearson@systemfabricworks.com
To: linux-rdma@vger.kernel.org
Cc: Bob Pearson
Subject: [patch 35/44] rxe_dma.c

Dummy DMA mapping operations for the rxe device.  rxe is a software
device that does no real DMA, so these ib_dma_mapping_ops callbacks
simply hand back kernel virtual addresses cast to u64.

Signed-off-by: Bob Pearson

---
 drivers/infiniband/hw/rxe/rxe_dma.c |  178 ++++++++++++++++++++++++++++++++++++
 1 file changed, 178 insertions(+)
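For context: the ib_dma_*() helpers in include/rdma/ib_verbs.h dispatch
through a device-supplied struct ib_dma_mapping_ops when one is set, and
fall back to the real DMA API otherwise.  The sketch below is a paraphrase
of that dispatch path, not verbatim kernel source, and is shown only to
illustrate how the no-op table added by this patch gets called:

	static inline u64 ib_dma_map_single(struct ib_device *dev,
					    void *cpu_addr, size_t size,
					    enum dma_data_direction direction)
	{
		/* software devices such as rxe install dma_ops ... */
		if (dev->dma_ops)
			return dev->dma_ops->map_single(dev, cpu_addr,
							size, direction);
		/* ... hardware devices take the regular DMA mapping path */
		return dma_map_single(dev->dma_device, cpu_addr,
				      size, direction);
	}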
Index: infiniband/drivers/infiniband/hw/rxe/rxe_dma.c
===================================================================
--- /dev/null
+++ infiniband/drivers/infiniband/hw/rxe/rxe_dma.c
@@ -0,0 +1,178 @@
+/*
+ * Copyright (c) 2009-2011 Mellanox Technologies Ltd. All rights reserved.
+ * Copyright (c) 2009-2011 System Fabric Works, Inc. All rights reserved.
+ * Copyright (c) 2006 QLogic, Corporation. All rights reserved.
+ *
+ * This software is available to you under a choice of one of two
+ * licenses.  You may choose to be licensed under the terms of the GNU
+ * General Public License (GPL) Version 2, available from the file
+ * COPYING in the main directory of this source tree, or the
+ * OpenIB.org BSD license below:
+ *
+ *     Redistribution and use in source and binary forms, with or
+ *     without modification, are permitted provided that the following
+ *     conditions are met:
+ *
+ *      - Redistributions of source code must retain the above
+ *        copyright notice, this list of conditions and the following
+ *        disclaimer.
+ *
+ *      - Redistributions in binary form must reproduce the above
+ *        copyright notice, this list of conditions and the following
+ *        disclaimer in the documentation and/or other materials
+ *        provided with the distribution.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+ * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
+ * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
+ * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
+ * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ * SOFTWARE.
+ */
+
+#include "rxe.h"
+#include "rxe_loc.h"
+
+static int rxe_mapping_error(struct ib_device *dev, u64 dma_addr)
+{
+	return dma_addr == 0;
+}
+
+static u64 rxe_dma_map_single(struct ib_device *dev,
+			      void *cpu_addr, size_t size,
+			      enum dma_data_direction direction)
+{
+	BUG_ON(!valid_dma_direction(direction));
+	return (u64) (uintptr_t) cpu_addr;
+}
+
+static void rxe_dma_unmap_single(struct ib_device *dev,
+				 u64 addr, size_t size,
+				 enum dma_data_direction direction)
+{
+	BUG_ON(!valid_dma_direction(direction));
+}
+
+static u64 rxe_dma_map_page(struct ib_device *dev,
+			    struct page *page,
+			    unsigned long offset,
+			    size_t size, enum dma_data_direction direction)
+{
+	u64 addr = 0;
+
+	BUG_ON(!valid_dma_direction(direction));
+
+	if (offset + size > PAGE_SIZE)
+		goto out;
+
+	addr = (uintptr_t) page_address(page);
+	if (addr)
+		addr += offset;
+
+out:
+	return addr;
+}
+
+static void rxe_dma_unmap_page(struct ib_device *dev,
+			       u64 addr, size_t size,
+			       enum dma_data_direction direction)
+{
+	BUG_ON(!valid_dma_direction(direction));
+}
+
+static int rxe_map_sg(struct ib_device *dev, struct scatterlist *sgl,
+		      int nents, enum dma_data_direction direction)
+{
+	struct scatterlist *sg;
+	u64 addr;
+	int i;
+	int ret = nents;
+
+	BUG_ON(!valid_dma_direction(direction));
+
+	for_each_sg(sgl, sg, nents, i) {
+		addr = (uintptr_t) page_address(sg_page(sg));
+		/* TODO: handle highmem pages */
+		if (!addr) {
+			ret = 0;
+			break;
+		}
+	}
+
+	return ret;
+}
+
+static void rxe_unmap_sg(struct ib_device *dev,
+			 struct scatterlist *sg, int nents,
+			 enum dma_data_direction direction)
+{
+	BUG_ON(!valid_dma_direction(direction));
+}
+
+static u64 rxe_sg_dma_address(struct ib_device *dev, struct scatterlist *sg)
+{
+	u64 addr = (uintptr_t) page_address(sg_page(sg));
+
+	if (addr)
+		addr += sg->offset;
+
+	return addr;
+}
+
+static unsigned int rxe_sg_dma_len(struct ib_device *dev,
+				   struct scatterlist *sg)
+{
+	return sg->length;
+}
+
+static void rxe_sync_single_for_cpu(struct ib_device *dev,
+				    u64 addr,
+				    size_t size, enum dma_data_direction dir)
+{
+}
+
+static void rxe_sync_single_for_device(struct ib_device *dev,
+				       u64 addr,
+				       size_t size, enum dma_data_direction dir)
+{
+}
+
+static void *rxe_dma_alloc_coherent(struct ib_device *dev, size_t size,
+				    u64 *dma_handle, gfp_t flag)
+{
+	struct page *p;
+	void *addr = NULL;
+
+	p = alloc_pages(flag, get_order(size));
+	if (p)
+		addr = page_address(p);
+
+	if (dma_handle)
+		*dma_handle = (uintptr_t) addr;
+
+	return addr;
+}
+
+static void rxe_dma_free_coherent(struct ib_device *dev, size_t size,
+				  void *cpu_addr, u64 dma_handle)
+{
+	free_pages((unsigned long)cpu_addr, get_order(size));
+}
+
+struct ib_dma_mapping_ops rxe_dma_mapping_ops = {
+	rxe_mapping_error,
+	rxe_dma_map_single,
+	rxe_dma_unmap_single,
+	rxe_dma_map_page,
+	rxe_dma_unmap_page,
+	rxe_map_sg,
+	rxe_unmap_sg,
+	rxe_sg_dma_address,
+	rxe_sg_dma_len,
+	rxe_sync_single_for_cpu,
+	rxe_sync_single_for_device,
+	rxe_dma_alloc_coherent,
+	rxe_dma_free_coherent
+};
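
The table above is presumably wired up when the rxe ib_device is registered
elsewhere in this series.  A minimal sketch of that hook-up follows;
rxe_register_device() and the struct rxe_dev that embeds the ib_device are
assumptions drawn from the rest of the series, not part of this patch:

	/* hypothetical hook-up, not part of this patch */
	static int rxe_register_device(struct rxe_dev *rxe)
	{
		struct ib_device *dev = &rxe->ib_dev;

		/* route ib_dma_map_single() and friends to the no-op table */
		dev->dma_ops = &rxe_dma_mapping_ops;

		/* ... remaining verbs setup and ib_register_device() ... */
		return 0;
	}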