From patchwork Thu May  8 06:52:39 2014
X-Patchwork-Submitter: Or Gerlitz <ogerlitz@mellanox.com>
X-Patchwork-Id: 4133501
From: Or Gerlitz <ogerlitz@mellanox.com>
To: yishaih@mellanox.com
Cc: linux-rdma@vger.kernel.org, roland@kernel.org, matanb@mellanox.com,
    dledford@redhat.com, Or Gerlitz <ogerlitz@mellanox.com>
Subject: [PATCH libmlx4 V2 1/2] Add RoCE IP based addressing support for UD QPs
Date: Thu, 8 May 2014 09:52:39 +0300
Message-Id: <1399531960-30738-2-git-send-email-ogerlitz@mellanox.com>
In-Reply-To: <1399531960-30738-1-git-send-email-ogerlitz@mellanox.com>
References: <1399531960-30738-1-git-send-email-ogerlitz@mellanox.com>
List-ID: linux-rdma@vger.kernel.org

From: Matan Barak <matanb@mellanox.com>

In order to implement IP based addressing for UD QPs, we need a way to
resolve the addresses internally. The L2 parameters are passed to the
provider driver through an extension verb, drv_ibv_create_ah_ex. libmlx4
receives the extra MAC and VLAN ID (vid) parameters from libibverbs and
sets the corresponding mlx4_ah attributes.
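As a rough illustration of the caller side, the sketch below fills in the
extended address-handle attributes that mlx4_create_ah_ex (added in the diff
below) consumes. Only the field and flag names actually referenced by this
patch (comp_mask, ll_address, vid, sl, port_num, IBV_AH_ATTR_EX_LL,
IBV_AH_ATTR_EX_VID, LL_ADDRESS_ETH) are taken from the code; the exact layout
of struct ibv_ah_attr_ex, the assumption that it embeds the regular
struct ibv_ah_attr fields (suggested by the cast in mlx4_create_ah_ex), and
the libibverbs entry point that forwards it to the provider's
drv_ibv_create_ah_ex hook belong to the companion libibverbs series and are
assumptions here, not part of this patch.

#include <string.h>
#include <infiniband/verbs.h>

/* Hypothetical caller-side helper: fill the extended AH attributes with a
 * resolved destination MAC and VLAN ID, as consumed by mlx4_create_ah_ex. */
static void fill_roce_ah_attr_ex(struct ibv_ah_attr_ex *attr_ex,
				 union ibv_gid *dgid, uint8_t *dmac,
				 uint16_t vid, uint8_t port, uint8_t sl)
{
	memset(attr_ex, 0, sizeof(*attr_ex));

	/* Regular ibv_ah_attr fields (mlx4_create_ah_common reads these
	 * through a cast to struct ibv_ah_attr). */
	attr_ex->port_num  = port;
	attr_ex->is_global = 1;			/* RoCE traffic carries a GRH */
	attr_ex->grh.dgid  = *dgid;
	attr_ex->sl        = sl;

	/* Extension fields: resolved L2 (Ethernet) address plus VLAN ID. */
	attr_ex->comp_mask          = IBV_AH_ATTR_EX_LL | IBV_AH_ATTR_EX_VID;
	attr_ex->ll_address.type    = LL_ADDRESS_ETH;
	attr_ex->ll_address.len     = 6;	/* Ethernet MAC length */
	attr_ex->ll_address.address = dmac;
	attr_ex->vid                = vid;	/* must be <= 0xfff */

	/* libibverbs would then pass attr_ex to the provider through the
	 * drv_ibv_create_ah_ex hook registered in mlx4_init_context below. */
}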
Signed-off-by: Matan Barak <matanb@mellanox.com>
Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
---
 src/mlx4.c  |    4 ++-
 src/mlx4.h  |    2 +
 src/verbs.c |   90 +++++++++++++++++++++++++++++++++++++++++++++++++++-------
 3 files changed, 84 insertions(+), 12 deletions(-)

diff --git a/src/mlx4.c b/src/mlx4.c
index 2999150..5943750 100644
--- a/src/mlx4.c
+++ b/src/mlx4.c
@@ -196,7 +196,8 @@ static int mlx4_init_context(struct verbs_device *v_device,
 	ibv_ctx->ops = mlx4_ctx_ops;
 
 	verbs_ctx->has_comp_mask = VERBS_CONTEXT_XRCD | VERBS_CONTEXT_SRQ |
-					VERBS_CONTEXT_QP;
+					VERBS_CONTEXT_QP |
+					VERBS_CONTEXT_CREATE_AH;
 	verbs_set_ctx_op(verbs_ctx, close_xrcd, mlx4_close_xrcd);
 	verbs_set_ctx_op(verbs_ctx, open_xrcd, mlx4_open_xrcd);
 	verbs_set_ctx_op(verbs_ctx, create_srq_ex, mlx4_create_srq_ex);
@@ -205,6 +206,7 @@ static int mlx4_init_context(struct verbs_device *v_device,
 	verbs_set_ctx_op(verbs_ctx, open_qp, mlx4_open_qp);
 	verbs_set_ctx_op(verbs_ctx, drv_ibv_create_flow, ibv_cmd_create_flow);
 	verbs_set_ctx_op(verbs_ctx, drv_ibv_destroy_flow, ibv_cmd_destroy_flow);
+	verbs_set_ctx_op(verbs_ctx, drv_ibv_create_ah_ex, mlx4_create_ah_ex);
 
 	return 0;
 }
diff --git a/src/mlx4.h b/src/mlx4.h
index d71450f..3015357 100644
--- a/src/mlx4.h
+++ b/src/mlx4.h
@@ -431,6 +431,8 @@ struct mlx4_qp *mlx4_find_qp(struct mlx4_context *ctx, uint32_t qpn);
 int mlx4_store_qp(struct mlx4_context *ctx, uint32_t qpn, struct mlx4_qp *qp);
 void mlx4_clear_qp(struct mlx4_context *ctx, uint32_t qpn);
 struct ibv_ah *mlx4_create_ah(struct ibv_pd *pd, struct ibv_ah_attr *attr);
+struct ibv_ah *mlx4_create_ah_ex(struct ibv_pd *pd,
+				 struct ibv_ah_attr_ex *attr_ex);
 int mlx4_destroy_ah(struct ibv_ah *ah);
 int mlx4_alloc_av(struct mlx4_pd *pd, struct ibv_ah_attr *attr,
 		   struct mlx4_ah *ah);
diff --git a/src/verbs.c b/src/verbs.c
index 623d576..e322a34 100644
--- a/src/verbs.c
+++ b/src/verbs.c
@@ -783,13 +783,11 @@ static int mlx4_resolve_grh_to_l2(struct ibv_pd *pd, struct mlx4_ah *ah,
 	return 0;
 }
 
-struct ibv_ah *mlx4_create_ah(struct ibv_pd *pd, struct ibv_ah_attr *attr)
+static struct ibv_ah *mlx4_create_ah_common(struct ibv_pd *pd,
+					    struct ibv_ah_attr *attr,
+					    uint8_t link_layer)
 {
 	struct mlx4_ah *ah;
-	struct ibv_port_attr port_attr;
-
-	if (ibv_query_port(pd->context, attr->port_num, &port_attr))
-		return NULL;
 
 	ah = malloc(sizeof *ah);
 	if (!ah)
@@ -799,7 +797,7 @@ struct ibv_ah *mlx4_create_ah(struct ibv_pd *pd, struct ibv_ah_attr *attr)
 
 	ah->av.port_pd = htonl(to_mpd(pd)->pdn | (attr->port_num << 24));
 
-	if (port_attr.link_layer != IBV_LINK_LAYER_ETHERNET) {
+	if (link_layer != IBV_LINK_LAYER_ETHERNET) {
 		ah->av.g_slid = attr->src_path_bits;
 		ah->av.dlid = htons(attr->dlid);
 		ah->av.sl_tclass_flowlabel = htonl(attr->sl << 28);
@@ -820,13 +818,83 @@ struct ibv_ah *mlx4_create_ah(struct ibv_pd *pd, struct ibv_ah_attr *attr)
 		memcpy(ah->av.dgid, attr->grh.dgid.raw, 16);
 	}
 
-	if (port_attr.link_layer == IBV_LINK_LAYER_ETHERNET)
-		if (mlx4_resolve_grh_to_l2(pd, ah, attr)) {
-			free(ah);
-			return NULL;
+	return &ah->ibv_ah;
+}
+
+struct ibv_ah *mlx4_create_ah(struct ibv_pd *pd, struct ibv_ah_attr *attr)
+{
+	struct ibv_ah *ah;
+	struct ibv_port_attr port_attr;
+
+	if (ibv_query_port(pd->context, attr->port_num, &port_attr))
+		return NULL;
+
+	ah = mlx4_create_ah_common(pd, attr, port_attr.link_layer);
+	if (NULL != ah &&
+	    (port_attr.link_layer != IBV_LINK_LAYER_ETHERNET ||
+	     !mlx4_resolve_grh_to_l2(pd, to_mah(ah), attr)))
+		return ah;
+
+	if (ah)
+		free(ah);
+	return NULL;
+}
+
+struct ibv_ah *mlx4_create_ah_ex(struct ibv_pd *pd,
+				 struct ibv_ah_attr_ex *attr_ex)
+{
+	struct ibv_port_attr port_attr;
+	struct ibv_ah *ah;
+	struct mlx4_ah *mah;
+
+	if (ibv_query_port(pd->context, attr_ex->port_num, &port_attr))
+		return NULL;
+
+	ah = mlx4_create_ah_common(pd, (struct ibv_ah_attr *)attr_ex,
+				   port_attr.link_layer);
+
+	if (NULL == ah)
+		return NULL;
+
+	mah = to_mah(ah);
+
+	/* If vlan was given, check that we could use it */
+	if (attr_ex->comp_mask & IBV_AH_ATTR_EX_VID &&
+	    attr_ex->vid <= 0xfff &&
+	    (0 == attr_ex->ll_address.len ||
+	     !(attr_ex->comp_mask & IBV_AH_ATTR_EX_LL)))
+		goto err;
+
+	/* ll_address.len == 0 means no ll address given */
+	if (attr_ex->comp_mask & IBV_AH_ATTR_EX_LL &&
+	    0 != attr_ex->ll_address.len) {
+		if (LL_ADDRESS_ETH != attr_ex->ll_address.type ||
+		    port_attr.link_layer != IBV_LINK_LAYER_ETHERNET)
+			/* mlx4 provider currently only support ethernet
+			 * extensions */
+			goto err;
+
+		/* link layer is ethernet */
+		if (6 != attr_ex->ll_address.len ||
+		    NULL == attr_ex->ll_address.address)
+			goto err;
+
+		memcpy(mah->mac, attr_ex->ll_address.address,
+		       attr_ex->ll_address.len);
+
+		if (attr_ex->comp_mask & IBV_AH_ATTR_EX_VID &&
+		    attr_ex->vid <= 0xfff) {
+			mah->av.port_pd |= htonl(1 << 29);
+			mah->vlan = attr_ex->vid |
+				((attr_ex->sl & 7) << 13);
 		}
+	}
 
-	return &ah->ibv_ah;
+	return ah;
+
+err:
+	free(ah);
+	return NULL;
 }
 
 int mlx4_destroy_ah(struct ibv_ah *ah)
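For reference, the VLAN handling added in mlx4_create_ah_ex packs the 12-bit
VLAN ID into bits 0-11 of mah->vlan and the 3-bit service level into bits
13-15, mirroring the 802.1Q TCI layout with the SL reused as the user
priority, and it sets bit 29 of av.port_pd to mark the address vector as
VLAN-tagged (an inference from the code above; the patch does not state the
bit meanings). The standalone snippet below only demonstrates that
arithmetic:

#include <stdio.h>
#include <stdint.h>
#include <arpa/inet.h>

int main(void)
{
	uint16_t vid = 100;	/* example VLAN ID, must be <= 0xfff */
	uint8_t  sl  = 3;	/* example service level */
	uint32_t port_pd = 0;	/* stands in for mah->av.port_pd */

	/* Same packing as mlx4_create_ah_ex above */
	uint16_t vlan = vid | ((sl & 7) << 13);
	port_pd |= htonl(1 << 29);	/* mark the AV as VLAN-tagged */

	printf("vlan = 0x%04x (vid %d, priority %d), port_pd flag = 0x%08x\n",
	       vlan, vlan & 0xfff, vlan >> 13, ntohl(port_pd));
	return 0;
}

With vid 100 and sl 3 this prints vlan = 0x6064, i.e. VLAN ID 100 with
priority 3, and the htonl(1 << 29) flag corresponds to 0x20000000 in host
byte order.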