From patchwork Thu Jan 30 11:49:48 2014
X-Patchwork-Submitter: Eli Cohen
X-Patchwork-Id: 3556511
From: Eli Cohen
To: roland@kernel.org
Cc: ydroneaud@opteya.com, linux-rdma@vger.kernel.org, ogerlitz@mellanox.com,
    Eli Cohen
Subject: [PATCH v2] IB/mlx5: Fix binary compatibility with libmlx5
Date: Thu, 30 Jan 2014 13:49:48 +0200
Message-Id: <1391082588-29077-1-git-send-email-eli@mellanox.com>
X-Mailer: git-send-email 1.8.5.2
X-Mailing-List: linux-rdma@vger.kernel.org

Commit c1be5232d21d ("Fix micro UAR allocator") broke binary compatibility
between libmlx5 and mlx5_ib, since it defines a different value for the
number of micro UARs per page, leading to a wrong calculation in libmlx5.

This patch defines struct mlx5_ib_alloc_ucontext_req_v2 as an extension of
struct mlx5_ib_alloc_ucontext_req. The request size is checked in
mlx5_ib_alloc_ucontext(); in the case of an old library we use uuarn 0,
which works fine -- this is achieved because create_user_qp() falls back
from the high to the medium and then to the low latency class, and the low
class returns uuarn 0. For new libraries we use the more sophisticated
allocation algorithm.

Fixes: c1be5232d21d ('Fix micro UAR allocator')
Signed-off-by: Eli Cohen
Reviewed-by: Yann Droneaud
---
Changes from v1:
  Use a 12-digit commit hash, as required
  Fix mlx5_ib_alloc_ucontext() to subtract the size of struct ib_uverbs_cmd_hdr
  Improve the change log
  Remove whitespace

 drivers/infiniband/hw/mlx5/main.c | 19 +++++++++++++++++--
 drivers/infiniband/hw/mlx5/qp.c   | 10 ++++++++--
 drivers/infiniband/hw/mlx5/user.h |  7 +++++++
 include/linux/mlx5/driver.h       |  1 +
 4 files changed, 33 insertions(+), 4 deletions(-)

diff --git a/drivers/infiniband/hw/mlx5/main.c b/drivers/infiniband/hw/mlx5/main.c
index 9660d093f8cf..f4ef4a24d410 100644
--- a/drivers/infiniband/hw/mlx5/main.c
+++ b/drivers/infiniband/hw/mlx5/main.c
@@ -536,24 +536,38 @@ static struct ib_ucontext *mlx5_ib_alloc_ucontext(struct ib_device *ibdev,
 						  struct ib_udata *udata)
 {
 	struct mlx5_ib_dev *dev = to_mdev(ibdev);
-	struct mlx5_ib_alloc_ucontext_req req;
+	struct mlx5_ib_alloc_ucontext_req_v2 req;
 	struct mlx5_ib_alloc_ucontext_resp resp;
 	struct mlx5_ib_ucontext *context;
 	struct mlx5_uuar_info *uuari;
 	struct mlx5_uar *uars;
 	int gross_uuars;
 	int num_uars;
+	int ver;
 	int uuarn;
 	int err;
 	int i;
+	int reqlen;
 
 	if (!dev->ib_active)
 		return ERR_PTR(-EAGAIN);
 
-	err = ib_copy_from_udata(&req, udata, sizeof(req));
+	memset(&req, 0, sizeof(req));
+	reqlen = udata->inlen - sizeof(struct ib_uverbs_cmd_hdr);
+	if (reqlen == sizeof(struct mlx5_ib_alloc_ucontext_req))
+		ver = 0;
+	else if (reqlen == sizeof(struct mlx5_ib_alloc_ucontext_req_v2))
+		ver = 2;
+	else
+		return ERR_PTR(-EINVAL);
+
+	err = ib_copy_from_udata(&req, udata, reqlen);
 	if (err)
 		return ERR_PTR(err);
 
+	if (req.flags || req.reserved)
+		return ERR_PTR(-EINVAL);
+
 	if (req.total_num_uuars > MLX5_MAX_UUARS)
 		return ERR_PTR(-ENOMEM);
 
@@ -626,6 +640,7 @@ static struct ib_ucontext *mlx5_ib_alloc_ucontext(struct ib_device *ibdev,
 	if (err)
 		goto out_uars;
 
+	uuari->ver = ver;
 	uuari->num_low_latency_uuars = req.num_low_latency_uuars;
 	uuari->uars = uars;
 	uuari->num_uars = num_uars;
diff --git a/drivers/infiniband/hw/mlx5/qp.c b/drivers/infiniband/hw/mlx5/qp.c
index 492dc330e907..091576a777e9 100644
--- a/drivers/infiniband/hw/mlx5/qp.c
+++ b/drivers/infiniband/hw/mlx5/qp.c
@@ -430,11 +430,17 @@ static int alloc_uuar(struct mlx5_uuar_info *uuari,
 		break;
 
 	case MLX5_IB_LATENCY_CLASS_MEDIUM:
-		uuarn = alloc_med_class_uuar(uuari);
+		if (uuari->ver < 2)
+			uuarn = -ENOMEM;
+		else
+			uuarn = alloc_med_class_uuar(uuari);
 		break;
 
 	case MLX5_IB_LATENCY_CLASS_HIGH:
-		uuarn = alloc_high_class_uuar(uuari);
+		if (uuari->ver < 2)
+			uuarn = -ENOMEM;
+		else
+			uuarn = alloc_high_class_uuar(uuari);
 		break;
 
 	case MLX5_IB_LATENCY_CLASS_FAST_PATH:
diff --git a/drivers/infiniband/hw/mlx5/user.h b/drivers/infiniband/hw/mlx5/user.h
index 32a2a5dfc523..0f4f8e42a17f 100644
--- a/drivers/infiniband/hw/mlx5/user.h
+++ b/drivers/infiniband/hw/mlx5/user.h
@@ -62,6 +62,13 @@ struct mlx5_ib_alloc_ucontext_req {
 	__u32	num_low_latency_uuars;
 };
 
+struct mlx5_ib_alloc_ucontext_req_v2 {
+	__u32	total_num_uuars;
+	__u32	num_low_latency_uuars;
+	__u32	flags;
+	__u32	reserved;
+};
+
 struct mlx5_ib_alloc_ucontext_resp {
 	__u32	qp_tab_size;
 	__u32	bf_reg_size;
diff --git a/include/linux/mlx5/driver.h b/include/linux/mlx5/driver.h
index 554548cd3dd4..32cb18c399c2 100644
--- a/include/linux/mlx5/driver.h
+++ b/include/linux/mlx5/driver.h
@@ -227,6 +227,7 @@ struct mlx5_uuar_info {
 	 * protect uuar allocation data structs
 	 */
 	struct mutex	lock;
+	u32		ver;
 };
 
 struct mlx5_bf {
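
For reference only (not part of the patch): a minimal user-space sketch of
the versioning scheme described above, showing how the kernel tells an old
libmlx5 from a new one purely by the size of the alloc-ucontext request it
receives. The struct layouts mirror drivers/infiniband/hw/mlx5/user.h; the
detect_version() helper is a hypothetical stand-in for the reqlen check
added to mlx5_ib_alloc_ucontext(), and the latency-class fallback itself is
not modeled.

#include <stdio.h>
#include <stdint.h>
#include <stddef.h>

/* v0 request: what an old library sends (8 bytes). */
struct mlx5_ib_alloc_ucontext_req {
	uint32_t total_num_uuars;
	uint32_t num_low_latency_uuars;
};

/* v2 request: the extended layout added by this patch (16 bytes). */
struct mlx5_ib_alloc_ucontext_req_v2 {
	uint32_t total_num_uuars;
	uint32_t num_low_latency_uuars;
	uint32_t flags;
	uint32_t reserved;
};

/* Hypothetical helper mimicking the reqlen check in mlx5_ib_alloc_ucontext(). */
static int detect_version(size_t reqlen)
{
	if (reqlen == sizeof(struct mlx5_ib_alloc_ucontext_req))
		return 0;	/* old library: ends up with uuarn 0 via the fallback */
	if (reqlen == sizeof(struct mlx5_ib_alloc_ucontext_req_v2))
		return 2;	/* new library: full micro UAR allocator */
	return -1;		/* unknown ABI: the kernel returns -EINVAL */
}

int main(void)
{
	printf("old library -> ver %d\n",
	       detect_version(sizeof(struct mlx5_ib_alloc_ucontext_req)));
	printf("new library -> ver %d\n",
	       detect_version(sizeof(struct mlx5_ib_alloc_ucontext_req_v2)));
	return 0;
}

An old library thus keeps working unchanged (ver 0, hence uuarn 0), while a
library built against the v2 header opts in to the new allocation algorithm
simply by sending the larger request.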