From patchwork Thu Jan 5 14:16:36 2017
X-Patchwork-Submitter: Leon Romanovsky
X-Patchwork-Id: 9499071
From: Leon Romanovsky
To: dledford@redhat.com
Cc: linux-rdma@vger.kernel.org, Moni Shoua
Subject: [PATCH rdma-next] IB/cma: Allow port reuse for rdma_id
Date: Thu, 5 Jan 2017 16:16:36 +0200
Message-Id: <20170105141636.23284-2-leon@kernel.org>
In-Reply-To: <20170105141636.23284-1-leon@kernel.org>
References: <20170105141636.23284-1-leon@kernel.org>

From: Moni Shoua

When allocating a port number to bind to an rdma_id, and the allocation is
not for a specific port, the current rule is to allow only ports that are not
already in use by any other rdma_id. This condition is stronger than
necessary: the actual goal is a unique 5-tuple per rdma_id. Instead, compare
the current rdma_id with the other rdma_ids already bound to the candidate
port and allow reuse as long as each pair differs in at least one of
destination port, source address, or destination address.
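To make the rule concrete, here is a minimal standalone sketch of the
5-tuple comparison described above. It is illustrative only and not part of
the patch; struct cma_tuple_example and addrs_equal() are hypothetical names,
while the real in-kernel check is cma_port_is_unique() in the diff below.

#include <stdbool.h>
#include <stdint.h>
#include <string.h>
#include <netinet/in.h>

/* Hypothetical container for the fields that make up the 5-tuple. */
struct cma_tuple_example {
        struct sockaddr_storage src_addr;  /* source address */
        struct sockaddr_storage dst_addr;  /* destination address */
        uint16_t src_port;                 /* candidate (shared) source port */
        uint16_t dst_port;                 /* destination port */
};

/* Simplified address comparison; the kernel uses cma_addr_cmp() instead. */
static bool addrs_equal(const struct sockaddr_storage *a,
                        const struct sockaddr_storage *b)
{
        return memcmp(a, b, sizeof(*a)) == 0;
}

/*
 * Two ids may share a source port as long as they differ in at least one
 * of: destination port, source address, destination address.  Only a full
 * 5-tuple match is a conflict.
 */
static bool may_share_port(const struct cma_tuple_example *a,
                           const struct cma_tuple_example *b)
{
        if (a->dst_port != b->dst_port)
                return true;
        if (!addrs_equal(&a->src_addr, &b->src_addr))
                return true;
        if (!addrs_equal(&a->dst_addr, &b->dst_addr))
                return true;
        return false;  /* identical 5-tuple -> port cannot be reused */
}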
Signed-off-by: Moni Shoua
Signed-off-by: Leon Romanovsky
Acked-by: Sean Hefty
---
 drivers/infiniband/core/cma.c | 67 ++++++++++++++++++++++++++++++++++++++-----
 1 file changed, 60 insertions(+), 7 deletions(-)

-- 
2.10.2

diff --git a/drivers/infiniband/core/cma.c b/drivers/infiniband/core/cma.c
index 847c5ad..bd8d051 100644
--- a/drivers/infiniband/core/cma.c
+++ b/drivers/infiniband/core/cma.c
@@ -2843,20 +2843,26 @@ int rdma_resolve_addr(struct rdma_cm_id *id, struct sockaddr *src_addr,
 	int ret;
 
 	id_priv = container_of(id, struct rdma_id_private, id);
+	memcpy(cma_dst_addr(id_priv), dst_addr, rdma_addr_size(dst_addr));
 	if (id_priv->state == RDMA_CM_IDLE) {
 		ret = cma_bind_addr(id, src_addr, dst_addr);
-		if (ret)
+		if (ret) {
+			memset(cma_dst_addr(id_priv), 0, rdma_addr_size(dst_addr));
 			return ret;
+		}
 	}
 
-	if (cma_family(id_priv) != dst_addr->sa_family)
+	if (cma_family(id_priv) != dst_addr->sa_family) {
+		memset(cma_dst_addr(id_priv), 0, rdma_addr_size(dst_addr));
 		return -EINVAL;
+	}
 
-	if (!cma_comp_exch(id_priv, RDMA_CM_ADDR_BOUND, RDMA_CM_ADDR_QUERY))
+	if (!cma_comp_exch(id_priv, RDMA_CM_ADDR_BOUND, RDMA_CM_ADDR_QUERY)) {
+		memset(cma_dst_addr(id_priv), 0, rdma_addr_size(dst_addr));
 		return -EINVAL;
+	}
 
 	atomic_inc(&id_priv->refcount);
-	memcpy(cma_dst_addr(id_priv), dst_addr, rdma_addr_size(dst_addr));
 	if (cma_any_addr(dst_addr)) {
 		ret = cma_resolve_loopback(id_priv);
 	} else {
@@ -2972,6 +2978,43 @@ static int cma_alloc_port(enum rdma_port_space ps,
 	return ret == -ENOSPC ? -EADDRNOTAVAIL : ret;
 }
 
+static int cma_port_is_unique(struct rdma_bind_list *bind_list,
+			      struct rdma_id_private *id_priv)
+{
+	struct rdma_id_private *cur_id;
+	struct sockaddr *daddr = cma_dst_addr(id_priv);
+	struct sockaddr *saddr = cma_src_addr(id_priv);
+	__be16 dport = cma_port(daddr);
+
+	hlist_for_each_entry(cur_id, &bind_list->owners, node) {
+		struct sockaddr *cur_daddr = cma_dst_addr(cur_id);
+		struct sockaddr *cur_saddr = cma_src_addr(cur_id);
+		__be16 cur_dport = cma_port(cur_daddr);
+
+		if (id_priv == cur_id)
+			continue;
+
+		/* different dest port -> unique */
+		if (!cma_any_port(cur_daddr) &&
+		    (dport != cur_dport))
+			continue;
+
+		/* different src address -> unique */
+		if (!cma_any_addr(saddr) &&
+		    !cma_any_addr(cur_saddr) &&
+		    cma_addr_cmp(saddr, cur_saddr))
+			continue;
+
+		/* different dst address -> unique */
+		if (!cma_any_addr(cur_daddr) &&
+		    cma_addr_cmp(daddr, cur_daddr))
+			continue;
+
+		return -EADDRNOTAVAIL;
+	}
+	return 0;
+}
+
 static int cma_alloc_any_port(enum rdma_port_space ps,
 			      struct rdma_id_private *id_priv)
 {
@@ -2984,9 +3027,19 @@ static int cma_alloc_any_port(enum rdma_port_space ps,
 	remaining = (high - low) + 1;
 	rover = prandom_u32() % remaining + low;
 retry:
-	if (last_used_port != rover &&
-	    !cma_ps_find(net, ps, (unsigned short)rover)) {
-		int ret = cma_alloc_port(ps, id_priv, rover);
+	if (last_used_port != rover) {
+		struct rdma_bind_list *bind_list;
+		int ret;
+
+		bind_list = cma_ps_find(net, ps, (unsigned short)rover);
+
+		if (!bind_list) {
+			ret = cma_alloc_port(ps, id_priv, rover);
+		} else {
+			ret = cma_port_is_unique(bind_list, id_priv);
+			if (!ret)
+				cma_bind_port(bind_list, id_priv);
+		}
 		/*
 		 * Remember previously used port number in order to avoid
 		 * re-using same port immediately after it is closed.
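For context, here is a hedged user-space sketch (assuming librdmacm; the
addresses, port, and timeout are placeholders) of the situation the patch
targets: two rdma_cm_ids resolved with a wildcard source toward different
destinations, which this change allows to land on the same ephemeral source
port.

#include <stdint.h>
#include <string.h>
#include <arpa/inet.h>
#include <rdma/rdma_cma.h>

/* Fill in an IPv4 sockaddr from a dotted-quad string and a port. */
static void set_addr(struct sockaddr_in *sa, const char *ip, uint16_t port)
{
        memset(sa, 0, sizeof(*sa));
        sa->sin_family = AF_INET;
        sa->sin_port = htons(port);
        inet_pton(AF_INET, ip, &sa->sin_addr);
}

int main(void)
{
        struct rdma_event_channel *ch = rdma_create_event_channel();
        struct rdma_cm_id *id1, *id2;
        struct sockaddr_in dst1, dst2;

        set_addr(&dst1, "192.0.2.1", 7471);   /* placeholder destinations */
        set_addr(&dst2, "192.0.2.2", 7471);

        rdma_create_id(ch, &id1, NULL, RDMA_PS_TCP);
        rdma_create_id(ch, &id2, NULL, RDMA_PS_TCP);

        /*
         * No explicit source address or port: the kernel picks an ephemeral
         * port for each id while resolving.  With this patch, both ids may
         * receive the same port because their destination addresses differ.
         * Error handling and event processing are omitted for brevity.
         */
        rdma_resolve_addr(id1, NULL, (struct sockaddr *)&dst1, 2000);
        rdma_resolve_addr(id2, NULL, (struct sockaddr *)&dst2, 2000);

        /* ... wait for RDMA_CM_EVENT_ADDR_RESOLVED on ch, then continue ... */

        rdma_destroy_id(id1);
        rdma_destroy_id(id2);
        rdma_destroy_event_channel(ch);
        return 0;
}

Note that the diff also moves the memcpy of dst_addr to the top of
rdma_resolve_addr() (with memset cleanup on the error paths) so that the
destination is already recorded in the rdma_id by the time the port is chosen
under cma_bind_addr() and cma_port_is_unique() runs.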