From patchwork Mon Dec 10 19:09:29 2018
X-Patchwork-Submitter: Kamal Heib
X-Patchwork-Id: 10722251
From: Kamal Heib
To: Doug Ledford, Jason Gunthorpe
Cc: linux-rdma@vger.kernel.org, kamalheib1@gmail.com, Jason Gunthorpe
Subject: [PATCH rdma-next v5 01/20] RDMA/rdmavt: Fix rvt_create_ah function signature
Date: Mon, 10 Dec 2018 21:09:29 +0200
Message-Id: <20181210190948.6892-2-kamalheib1@gmail.com>
In-Reply-To: <20181210190948.6892-1-kamalheib1@gmail.com>
References: <20181210190948.6892-1-kamalheib1@gmail.com>

rdmavt uses a crazy system that loses the type checking when assigning
functions to struct ib_device function pointers. Because of this, the
signature of this function was not changed when the commit below revised
things. Fix the signature so that we are not calling a function pointer
with a mismatched signature.

Fixes: 477864c8fcd9 ("IB/core: Let create_ah return extended response to user")
Signed-off-by: Kamal Heib
Reviewed-by: Dennis Dalessandro
Signed-off-by: Jason Gunthorpe
---
 drivers/infiniband/sw/rdmavt/ah.c | 4 +++-
 drivers/infiniband/sw/rdmavt/ah.h | 3 ++-
 2 files changed, 5 insertions(+), 2 deletions(-)

diff --git a/drivers/infiniband/sw/rdmavt/ah.c b/drivers/infiniband/sw/rdmavt/ah.c
index 89ec0f64abfc..084bb4baebb5 100644
--- a/drivers/infiniband/sw/rdmavt/ah.c
+++ b/drivers/infiniband/sw/rdmavt/ah.c
@@ -91,13 +91,15 @@ EXPORT_SYMBOL(rvt_check_ah);
  * rvt_create_ah - create an address handle
  * @pd: the protection domain
  * @ah_attr: the attributes of the AH
+ * @udata: pointer to user's input output buffer information.
  *
  * This may be called from interrupt context.
  *
  * Return: newly allocated ah
  */
 struct ib_ah *rvt_create_ah(struct ib_pd *pd,
-			    struct rdma_ah_attr *ah_attr)
+			    struct rdma_ah_attr *ah_attr,
+			    struct ib_udata *udata)
 {
 	struct rvt_ah *ah;
 	struct rvt_dev_info *dev = ib_to_rvt(pd->device);
diff --git a/drivers/infiniband/sw/rdmavt/ah.h b/drivers/infiniband/sw/rdmavt/ah.h
index 16105af99189..25271b48a683 100644
--- a/drivers/infiniband/sw/rdmavt/ah.h
+++ b/drivers/infiniband/sw/rdmavt/ah.h
@@ -51,7 +51,8 @@
 #include

 struct ib_ah *rvt_create_ah(struct ib_pd *pd,
-			    struct rdma_ah_attr *ah_attr);
+			    struct rdma_ah_attr *ah_attr,
+			    struct ib_udata *udata);
 int rvt_destroy_ah(struct ib_ah *ibah);
 int rvt_modify_ah(struct ib_ah *ibah, struct rdma_ah_attr *ah_attr);
 int rvt_query_ah(struct ib_ah *ibah, struct rdma_ah_attr *ah_attr);
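[Editor's note: the sketch below is a minimal, standalone illustration of the failure
mode this patch removes; it is not rdmavt or kernel code, and every name in it
(fake_ops, old_create_ah, new_create_ah) is invented. It shows that a properly typed
function-pointer member lets the compiler catch a stale signature, while a cast (or an
untyped assignment scheme) silently hides the mismatch.]

#include <stdio.h>

struct ah;
struct pd;
struct udata;

struct fake_ops {
	/* Current interface: three parameters, like create_ah after the fix. */
	struct ah *(*create_ah)(struct pd *pd, int attr, struct udata *udata);
};

/* Stale implementation that was never updated to take udata. */
static struct ah *old_create_ah(struct pd *pd, int attr)
{
	(void)pd; (void)attr;
	return NULL;
}

/* Correct implementation matching the pointer's type. */
static struct ah *new_create_ah(struct pd *pd, int attr, struct udata *udata)
{
	(void)pd; (void)attr; (void)udata;
	return NULL;
}

int main(void)
{
	struct fake_ops ops;

	ops.create_ah = new_create_ah;	/* types match: checked by the compiler */

	/*
	 * ops.create_ah = old_create_ah;
	 * ...would be rejected or at least warned about (incompatible pointer
	 * types), which is exactly the check an untyped assignment scheme
	 * throws away.  A cast "fixes" the build while leaving a mismatched
	 * call waiting to happen at run time:
	 */
	ops.create_ah = (struct ah *(*)(struct pd *, int,
					struct udata *))old_create_ah;

	printf("create_ah entry is %s\n", ops.create_ah ? "set" : "unset");
	return 0;
}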
From patchwork Mon Dec 10 19:09:30 2018
X-Patchwork-Submitter: Kamal Heib
X-Patchwork-Id: 10722253
X-Patchwork-Delegate: jgg@ziepe.ca
From: Kamal Heib
To: Doug Ledford, Jason Gunthorpe
Cc: linux-rdma@vger.kernel.org, kamalheib1@gmail.com
Subject: [PATCH rdma-next v5 02/20] RDMA/core: Introduce ib_device_ops
Date: Mon, 10 Dec 2018 21:09:30 +0200
Message-Id: <20181210190948.6892-3-kamalheib1@gmail.com>
In-Reply-To: <20181210190948.6892-1-kamalheib1@gmail.com>
References: <20181210190948.6892-1-kamalheib1@gmail.com>

This change introduces the ib_device_ops structure, which defines all of the
InfiniBand device operations in one place, so the code becomes more readable
and clean, unlike today, where the ops are mixed in with the ib_device data
members. Providers will need to define the operations they support and assign
them using ib_set_device_ops(); this also makes the provider code more
readable and clean.

Signed-off-by: Kamal Heib
---
 drivers/infiniband/core/device.c |  98 +++++++++++++
 include/rdma/ib_verbs.h          | 245 +++++++++++++++++++++++++++++++
 2 files changed, 343 insertions(+)

diff --git a/drivers/infiniband/core/device.c b/drivers/infiniband/core/device.c
index 0027b0d79b09..3589894c46b8 100644
--- a/drivers/infiniband/core/device.c
+++ b/drivers/infiniband/core/device.c
@@ -1216,6 +1216,104 @@ struct net_device *ib_get_net_dev_by_params(struct ib_device *dev,
 }
 EXPORT_SYMBOL(ib_get_net_dev_by_params);

+void ib_set_device_ops(struct ib_device *dev, const struct ib_device_ops *ops)
+{
+#define SET_DEVICE_OP(ptr, name) \
+	do { \
+		if (ops->name) \
+			if (!((ptr)->name)) \
+				(ptr)->name = ops->name; \
+	} while (0)
+
+	SET_DEVICE_OP(dev, add_gid);
+	SET_DEVICE_OP(dev, alloc_dm);
+	SET_DEVICE_OP(dev, alloc_fmr);
+	SET_DEVICE_OP(dev, alloc_hw_stats);
+	SET_DEVICE_OP(dev, alloc_mr);
+	SET_DEVICE_OP(dev, alloc_mw);
+	SET_DEVICE_OP(dev, alloc_pd);
+	SET_DEVICE_OP(dev, alloc_rdma_netdev);
+	SET_DEVICE_OP(dev, alloc_ucontext);
+	SET_DEVICE_OP(dev, alloc_xrcd);
+	SET_DEVICE_OP(dev, attach_mcast);
+	SET_DEVICE_OP(dev, check_mr_status);
+	SET_DEVICE_OP(dev, create_ah);
+	SET_DEVICE_OP(dev, create_counters);
+	SET_DEVICE_OP(dev, create_cq);
+	SET_DEVICE_OP(dev, create_flow);
+	SET_DEVICE_OP(dev, create_flow_action_esp);
+	SET_DEVICE_OP(dev, create_qp);
+	SET_DEVICE_OP(dev, create_rwq_ind_table);
+	SET_DEVICE_OP(dev, create_srq);
+	SET_DEVICE_OP(dev, create_wq);
+	SET_DEVICE_OP(dev, dealloc_dm);
+	SET_DEVICE_OP(dev, dealloc_fmr);
+	SET_DEVICE_OP(dev, dealloc_mw);
+	SET_DEVICE_OP(dev, dealloc_pd);
+	SET_DEVICE_OP(dev, dealloc_ucontext);
+	SET_DEVICE_OP(dev, dealloc_xrcd);
+	SET_DEVICE_OP(dev, del_gid);
+	SET_DEVICE_OP(dev, dereg_mr);
+	SET_DEVICE_OP(dev, destroy_ah);
+	SET_DEVICE_OP(dev, destroy_counters);
+
SET_DEVICE_OP(dev, destroy_cq); + SET_DEVICE_OP(dev, destroy_flow); + SET_DEVICE_OP(dev, destroy_flow_action); + SET_DEVICE_OP(dev, destroy_qp); + SET_DEVICE_OP(dev, destroy_rwq_ind_table); + SET_DEVICE_OP(dev, destroy_srq); + SET_DEVICE_OP(dev, destroy_wq); + SET_DEVICE_OP(dev, detach_mcast); + SET_DEVICE_OP(dev, disassociate_ucontext); + SET_DEVICE_OP(dev, drain_rq); + SET_DEVICE_OP(dev, drain_sq); + SET_DEVICE_OP(dev, get_dev_fw_str); + SET_DEVICE_OP(dev, get_dma_mr); + SET_DEVICE_OP(dev, get_hw_stats); + SET_DEVICE_OP(dev, get_link_layer); + SET_DEVICE_OP(dev, get_netdev); + SET_DEVICE_OP(dev, get_port_immutable); + SET_DEVICE_OP(dev, get_vector_affinity); + SET_DEVICE_OP(dev, get_vf_config); + SET_DEVICE_OP(dev, get_vf_stats); + SET_DEVICE_OP(dev, map_mr_sg); + SET_DEVICE_OP(dev, map_phys_fmr); + SET_DEVICE_OP(dev, mmap); + SET_DEVICE_OP(dev, modify_ah); + SET_DEVICE_OP(dev, modify_cq); + SET_DEVICE_OP(dev, modify_device); + SET_DEVICE_OP(dev, modify_flow_action_esp); + SET_DEVICE_OP(dev, modify_port); + SET_DEVICE_OP(dev, modify_qp); + SET_DEVICE_OP(dev, modify_srq); + SET_DEVICE_OP(dev, modify_wq); + SET_DEVICE_OP(dev, peek_cq); + SET_DEVICE_OP(dev, poll_cq); + SET_DEVICE_OP(dev, post_recv); + SET_DEVICE_OP(dev, post_send); + SET_DEVICE_OP(dev, post_srq_recv); + SET_DEVICE_OP(dev, process_mad); + SET_DEVICE_OP(dev, query_ah); + SET_DEVICE_OP(dev, query_device); + SET_DEVICE_OP(dev, query_gid); + SET_DEVICE_OP(dev, query_pkey); + SET_DEVICE_OP(dev, query_port); + SET_DEVICE_OP(dev, query_qp); + SET_DEVICE_OP(dev, query_srq); + SET_DEVICE_OP(dev, rdma_netdev_get_params); + SET_DEVICE_OP(dev, read_counters); + SET_DEVICE_OP(dev, reg_dm_mr); + SET_DEVICE_OP(dev, reg_user_mr); + SET_DEVICE_OP(dev, req_ncomp_notif); + SET_DEVICE_OP(dev, req_notify_cq); + SET_DEVICE_OP(dev, rereg_user_mr); + SET_DEVICE_OP(dev, resize_cq); + SET_DEVICE_OP(dev, set_vf_guid); + SET_DEVICE_OP(dev, set_vf_link_state); + SET_DEVICE_OP(dev, unmap_fmr); +} +EXPORT_SYMBOL(ib_set_device_ops); + static const struct rdma_nl_cbs ibnl_ls_cb_table[RDMA_NL_LS_NUM_OPS] = { [RDMA_NL_LS_OP_RESOLVE] = { .doit = ib_nl_handle_resolve_resp, diff --git a/include/rdma/ib_verbs.h b/include/rdma/ib_verbs.h index 85021451eee0..3a02c72e7620 100644 --- a/include/rdma/ib_verbs.h +++ b/include/rdma/ib_verbs.h @@ -2257,6 +2257,249 @@ struct ib_counters_read_attr { struct uverbs_attr_bundle; +/** + * struct ib_device_ops - InfiniBand device operations + * This structure defines all the InfiniBand device operations, providers will + * need to define the supported operations, otherwise they will be set to null. 
+ */ +struct ib_device_ops { + int (*post_send)(struct ib_qp *qp, const struct ib_send_wr *send_wr, + const struct ib_send_wr **bad_send_wr); + int (*post_recv)(struct ib_qp *qp, const struct ib_recv_wr *recv_wr, + const struct ib_recv_wr **bad_recv_wr); + void (*drain_rq)(struct ib_qp *qp); + void (*drain_sq)(struct ib_qp *qp); + int (*poll_cq)(struct ib_cq *cq, int num_entries, struct ib_wc *wc); + int (*peek_cq)(struct ib_cq *cq, int wc_cnt); + int (*req_notify_cq)(struct ib_cq *cq, enum ib_cq_notify_flags flags); + int (*req_ncomp_notif)(struct ib_cq *cq, int wc_cnt); + int (*post_srq_recv)(struct ib_srq *srq, + const struct ib_recv_wr *recv_wr, + const struct ib_recv_wr **bad_recv_wr); + int (*process_mad)(struct ib_device *device, int process_mad_flags, + u8 port_num, const struct ib_wc *in_wc, + const struct ib_grh *in_grh, + const struct ib_mad_hdr *in_mad, size_t in_mad_size, + struct ib_mad_hdr *out_mad, size_t *out_mad_size, + u16 *out_mad_pkey_index); + int (*query_device)(struct ib_device *device, + struct ib_device_attr *device_attr, + struct ib_udata *udata); + int (*modify_device)(struct ib_device *device, int device_modify_mask, + struct ib_device_modify *device_modify); + void (*get_dev_fw_str)(struct ib_device *, char *str); + const struct cpumask *(*get_vector_affinity)(struct ib_device *ibdev, + int comp_vector); + int (*query_port)(struct ib_device *device, u8 port_num, + struct ib_port_attr *port_attr); + int (*modify_port)(struct ib_device *device, u8 port_num, + int port_modify_mask, + struct ib_port_modify *port_modify); + /** + * The following mandatory functions are used only at device + * registration. Keep functions such as these at the end of this + * structure to avoid cache line misses when accessing struct ib_device + * in fast paths. + */ + int (*get_port_immutable)(struct ib_device *, u8, + struct ib_port_immutable *); + enum rdma_link_layer (*get_link_layer)(struct ib_device *device, + u8 port_num); + /** + * When calling get_netdev, the HW vendor's driver should return the + * net device of device @device at port @port_num or NULL if such + * a net device doesn't exist. The vendor driver should call dev_hold + * on this net device. The HW vendor's device driver must guarantee + * that this function returns NULL before the net device has finished + * NETDEV_UNREGISTER state. + */ + struct net_device *(*get_netdev)(struct ib_device *device, u8 port_num); + /** + * rdma netdev operation + * + * Driver implementing alloc_rdma_netdev or rdma_netdev_get_params + * must return -EOPNOTSUPP if it doesn't support the specified type. + */ + struct net_device *(*alloc_rdma_netdev)( + struct ib_device *device, + u8 port_num, + enum rdma_netdev_t type, + const char *name, + unsigned char name_assign_type, + void (*setup)(struct net_device *)); + + int (*rdma_netdev_get_params)(struct ib_device *device, u8 port_num, + enum rdma_netdev_t type, + struct rdma_netdev_alloc_params *params); + /** + * query_gid should be return GID value for @device, when @port_num + * link layer is either IB or iWarp. It is no-op if @port_num port + * is RoCE link layer. + */ + int (*query_gid)(struct ib_device *device, u8 port_num, int index, + union ib_gid *gid); + /** + * When calling add_gid, the HW vendor's driver should add the gid + * of device of port at gid index available at @attr. Meta-info of + * that gid (for example, the network device related to this gid) is + * available at @attr. 
@context allows the HW vendor driver to store + * extra information together with a GID entry. The HW vendor driver may + * allocate memory to contain this information and store it in @context + * when a new GID entry is written to. Params are consistent until the + * next call of add_gid or delete_gid. The function should return 0 on + * success or error otherwise. The function could be called + * concurrently for different ports. This function is only called when + * roce_gid_table is used. + */ + int (*add_gid)(const struct ib_gid_attr *attr, void **context); + /** + * When calling del_gid, the HW vendor's driver should delete the + * gid of device @device at gid index gid_index of port port_num + * available in @attr. + * Upon the deletion of a GID entry, the HW vendor must free any + * allocated memory. The caller will clear @context afterwards. + * This function is only called when roce_gid_table is used. + */ + int (*del_gid)(const struct ib_gid_attr *attr, void **context); + int (*query_pkey)(struct ib_device *device, u8 port_num, u16 index, + u16 *pkey); + struct ib_ucontext *(*alloc_ucontext)(struct ib_device *device, + struct ib_udata *udata); + int (*dealloc_ucontext)(struct ib_ucontext *context); + int (*mmap)(struct ib_ucontext *context, struct vm_area_struct *vma); + void (*disassociate_ucontext)(struct ib_ucontext *ibcontext); + struct ib_pd *(*alloc_pd)(struct ib_device *device, + struct ib_ucontext *context, + struct ib_udata *udata); + int (*dealloc_pd)(struct ib_pd *pd); + struct ib_ah *(*create_ah)(struct ib_pd *pd, + struct rdma_ah_attr *ah_attr, + struct ib_udata *udata); + int (*modify_ah)(struct ib_ah *ah, struct rdma_ah_attr *ah_attr); + int (*query_ah)(struct ib_ah *ah, struct rdma_ah_attr *ah_attr); + int (*destroy_ah)(struct ib_ah *ah); + struct ib_srq *(*create_srq)(struct ib_pd *pd, + struct ib_srq_init_attr *srq_init_attr, + struct ib_udata *udata); + int (*modify_srq)(struct ib_srq *srq, struct ib_srq_attr *srq_attr, + enum ib_srq_attr_mask srq_attr_mask, + struct ib_udata *udata); + int (*query_srq)(struct ib_srq *srq, struct ib_srq_attr *srq_attr); + int (*destroy_srq)(struct ib_srq *srq); + struct ib_qp *(*create_qp)(struct ib_pd *pd, + struct ib_qp_init_attr *qp_init_attr, + struct ib_udata *udata); + int (*modify_qp)(struct ib_qp *qp, struct ib_qp_attr *qp_attr, + int qp_attr_mask, struct ib_udata *udata); + int (*query_qp)(struct ib_qp *qp, struct ib_qp_attr *qp_attr, + int qp_attr_mask, struct ib_qp_init_attr *qp_init_attr); + int (*destroy_qp)(struct ib_qp *qp); + struct ib_cq *(*create_cq)(struct ib_device *device, + const struct ib_cq_init_attr *attr, + struct ib_ucontext *context, + struct ib_udata *udata); + int (*modify_cq)(struct ib_cq *cq, u16 cq_count, u16 cq_period); + int (*destroy_cq)(struct ib_cq *cq); + int (*resize_cq)(struct ib_cq *cq, int cqe, struct ib_udata *udata); + struct ib_mr *(*get_dma_mr)(struct ib_pd *pd, int mr_access_flags); + struct ib_mr *(*reg_user_mr)(struct ib_pd *pd, u64 start, u64 length, + u64 virt_addr, int mr_access_flags, + struct ib_udata *udata); + int (*rereg_user_mr)(struct ib_mr *mr, int flags, u64 start, u64 length, + u64 virt_addr, int mr_access_flags, + struct ib_pd *pd, struct ib_udata *udata); + int (*dereg_mr)(struct ib_mr *mr); + struct ib_mr *(*alloc_mr)(struct ib_pd *pd, enum ib_mr_type mr_type, + u32 max_num_sg); + int (*map_mr_sg)(struct ib_mr *mr, struct scatterlist *sg, int sg_nents, + unsigned int *sg_offset); + int (*check_mr_status)(struct ib_mr *mr, u32 check_mask, + struct ib_mr_status 
*mr_status); + struct ib_mw *(*alloc_mw)(struct ib_pd *pd, enum ib_mw_type type, + struct ib_udata *udata); + int (*dealloc_mw)(struct ib_mw *mw); + struct ib_fmr *(*alloc_fmr)(struct ib_pd *pd, int mr_access_flags, + struct ib_fmr_attr *fmr_attr); + int (*map_phys_fmr)(struct ib_fmr *fmr, u64 *page_list, int list_len, + u64 iova); + int (*unmap_fmr)(struct list_head *fmr_list); + int (*dealloc_fmr)(struct ib_fmr *fmr); + int (*attach_mcast)(struct ib_qp *qp, union ib_gid *gid, u16 lid); + int (*detach_mcast)(struct ib_qp *qp, union ib_gid *gid, u16 lid); + struct ib_xrcd *(*alloc_xrcd)(struct ib_device *device, + struct ib_ucontext *ucontext, + struct ib_udata *udata); + int (*dealloc_xrcd)(struct ib_xrcd *xrcd); + struct ib_flow *(*create_flow)(struct ib_qp *qp, + struct ib_flow_attr *flow_attr, + int domain, struct ib_udata *udata); + int (*destroy_flow)(struct ib_flow *flow_id); + struct ib_flow_action *(*create_flow_action_esp)( + struct ib_device *device, + const struct ib_flow_action_attrs_esp *attr, + struct uverbs_attr_bundle *attrs); + int (*destroy_flow_action)(struct ib_flow_action *action); + int (*modify_flow_action_esp)( + struct ib_flow_action *action, + const struct ib_flow_action_attrs_esp *attr, + struct uverbs_attr_bundle *attrs); + int (*set_vf_link_state)(struct ib_device *device, int vf, u8 port, + int state); + int (*get_vf_config)(struct ib_device *device, int vf, u8 port, + struct ifla_vf_info *ivf); + int (*get_vf_stats)(struct ib_device *device, int vf, u8 port, + struct ifla_vf_stats *stats); + int (*set_vf_guid)(struct ib_device *device, int vf, u8 port, u64 guid, + int type); + struct ib_wq *(*create_wq)(struct ib_pd *pd, + struct ib_wq_init_attr *init_attr, + struct ib_udata *udata); + int (*destroy_wq)(struct ib_wq *wq); + int (*modify_wq)(struct ib_wq *wq, struct ib_wq_attr *attr, + u32 wq_attr_mask, struct ib_udata *udata); + struct ib_rwq_ind_table *(*create_rwq_ind_table)( + struct ib_device *device, + struct ib_rwq_ind_table_init_attr *init_attr, + struct ib_udata *udata); + int (*destroy_rwq_ind_table)(struct ib_rwq_ind_table *wq_ind_table); + struct ib_dm *(*alloc_dm)(struct ib_device *device, + struct ib_ucontext *context, + struct ib_dm_alloc_attr *attr, + struct uverbs_attr_bundle *attrs); + int (*dealloc_dm)(struct ib_dm *dm); + struct ib_mr *(*reg_dm_mr)(struct ib_pd *pd, struct ib_dm *dm, + struct ib_dm_mr_attr *attr, + struct uverbs_attr_bundle *attrs); + struct ib_counters *(*create_counters)( + struct ib_device *device, struct uverbs_attr_bundle *attrs); + int (*destroy_counters)(struct ib_counters *counters); + int (*read_counters)(struct ib_counters *counters, + struct ib_counters_read_attr *counters_read_attr, + struct uverbs_attr_bundle *attrs); + /** + * alloc_hw_stats - Allocate a struct rdma_hw_stats and fill in the + * driver initialized data. The struct is kfree()'ed by the sysfs + * core when the device is removed. A lifespan of -1 in the return + * struct tells the core to set a default lifespan. + */ + struct rdma_hw_stats *(*alloc_hw_stats)(struct ib_device *device, + u8 port_num); + /** + * get_hw_stats - Fill in the counter value(s) in the stats struct. 
+ * @index - The index in the value array we wish to have updated, or
+ * num_counters if we want all stats updated
+ * Return codes -
+ * < 0 - Error, no counters updated
+ * index - Updated the single counter pointed to by index
+ * num_counters - Updated all counters (will reset the timestamp
+ * and prevent further calls for lifespan milliseconds)
+ * Drivers are allowed to update all counters in leiu of just the
+ * one given in index at their option
+ */
+	int (*get_hw_stats)(struct ib_device *device,
+			    struct rdma_hw_stats *stats, u8 port, int index);
+};
+
 struct ib_device {
 	/* Do not access @dma_device directly from ULP nor from HW drivers. */
 	struct device *dma_device;
@@ -2660,6 +2903,8 @@ void ib_unregister_client(struct ib_client *client);
 void *ib_get_client_data(struct ib_device *device, struct ib_client *client);
 void  ib_set_client_data(struct ib_device *device, struct ib_client *client,
 			 void *data);
+void ib_set_device_ops(struct ib_device *device,
+		       const struct ib_device_ops *ops);

 #if IS_ENABLED(CONFIG_INFINIBAND_USER_ACCESS)
 int rdma_user_mmap_io(struct ib_ucontext *ucontext, struct vm_area_struct *vma,
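[Editor's note: a minimal standalone model of the mechanism the patch above introduces,
assuming nothing beyond what the diff shows. It is not the kernel implementation; every
name here (demo_device, demo_ops, demo_set_ops, foo_query_device) is invented. It shows
a per-driver const table of function pointers and a setter that copies only the ops the
driver actually provides, without overwriting slots that are already set.]

#include <stdio.h>

struct demo_ops {
	int (*query_device)(void *dev);
	int (*query_port)(void *dev, int port);
};

struct demo_device {
	const char *name;
	struct demo_ops ops;	/* filled in from one or more tables */
};

/* Mirrors the SET_DEVICE_OP() idea: copy only if provided and not yet set. */
#define DEMO_SET_OP(dev, tbl, name)				\
	do {							\
		if ((tbl)->name && !(dev)->ops.name)		\
			(dev)->ops.name = (tbl)->name;		\
	} while (0)

static void demo_set_ops(struct demo_device *dev, const struct demo_ops *tbl)
{
	DEMO_SET_OP(dev, tbl, query_device);
	DEMO_SET_OP(dev, tbl, query_port);
}

static int foo_query_device(void *dev)
{
	(void)dev;
	return 0;
}

/* The driver's table is const data, analogous to a driver's ib_device_ops. */
static const struct demo_ops foo_ops = {
	.query_device = foo_query_device,
	/* .query_port intentionally left unset: this driver lacks it */
};

int main(void)
{
	struct demo_device dev = { .name = "foo0" };

	demo_set_ops(&dev, &foo_ops);
	printf("query_device %s, query_port %s\n",
	       dev.ops.query_device ? "set" : "unset",
	       dev.ops.query_port ? "set" : "unset");
	return 0;
}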
From patchwork Mon Dec 10 19:09:31 2018
X-Patchwork-Submitter: Kamal Heib
X-Patchwork-Id: 10722255
X-Patchwork-Delegate: jgg@ziepe.ca
From: Kamal Heib
To: Doug Ledford, Jason Gunthorpe
Cc: linux-rdma@vger.kernel.org, kamalheib1@gmail.com
Subject: [PATCH rdma-next v5 03/20] RDMA/bnxt_re: Initialize ib_device_ops struct
Date: Mon, 10 Dec 2018 21:09:31 +0200
Message-Id: <20181210190948.6892-4-kamalheib1@gmail.com>
In-Reply-To: <20181210190948.6892-1-kamalheib1@gmail.com>
References: <20181210190948.6892-1-kamalheib1@gmail.com>

Initialize ib_device_ops with the supported operations using
ib_set_device_ops().
Signed-off-by: Kamal Heib --- drivers/infiniband/hw/bnxt_re/main.c | 96 +++++++++++++--------------- 1 file changed, 45 insertions(+), 51 deletions(-) diff --git a/drivers/infiniband/hw/bnxt_re/main.c b/drivers/infiniband/hw/bnxt_re/main.c index cf2282654210..ef4d7afb3c5b 100644 --- a/drivers/infiniband/hw/bnxt_re/main.c +++ b/drivers/infiniband/hw/bnxt_re/main.c @@ -568,6 +568,50 @@ static void bnxt_re_unregister_ib(struct bnxt_re_dev *rdev) ib_unregister_device(&rdev->ibdev); } +static const struct ib_device_ops bnxt_re_dev_ops = { + .add_gid = bnxt_re_add_gid, + .alloc_hw_stats = bnxt_re_ib_alloc_hw_stats, + .alloc_mr = bnxt_re_alloc_mr, + .alloc_pd = bnxt_re_alloc_pd, + .alloc_ucontext = bnxt_re_alloc_ucontext, + .create_ah = bnxt_re_create_ah, + .create_cq = bnxt_re_create_cq, + .create_qp = bnxt_re_create_qp, + .create_srq = bnxt_re_create_srq, + .dealloc_pd = bnxt_re_dealloc_pd, + .dealloc_ucontext = bnxt_re_dealloc_ucontext, + .del_gid = bnxt_re_del_gid, + .dereg_mr = bnxt_re_dereg_mr, + .destroy_ah = bnxt_re_destroy_ah, + .destroy_cq = bnxt_re_destroy_cq, + .destroy_qp = bnxt_re_destroy_qp, + .destroy_srq = bnxt_re_destroy_srq, + .get_dev_fw_str = bnxt_re_query_fw_str, + .get_dma_mr = bnxt_re_get_dma_mr, + .get_hw_stats = bnxt_re_ib_get_hw_stats, + .get_link_layer = bnxt_re_get_link_layer, + .get_netdev = bnxt_re_get_netdev, + .get_port_immutable = bnxt_re_get_port_immutable, + .map_mr_sg = bnxt_re_map_mr_sg, + .mmap = bnxt_re_mmap, + .modify_ah = bnxt_re_modify_ah, + .modify_device = bnxt_re_modify_device, + .modify_qp = bnxt_re_modify_qp, + .modify_srq = bnxt_re_modify_srq, + .poll_cq = bnxt_re_poll_cq, + .post_recv = bnxt_re_post_recv, + .post_send = bnxt_re_post_send, + .post_srq_recv = bnxt_re_post_srq_recv, + .query_ah = bnxt_re_query_ah, + .query_device = bnxt_re_query_device, + .query_pkey = bnxt_re_query_pkey, + .query_port = bnxt_re_query_port, + .query_qp = bnxt_re_query_qp, + .query_srq = bnxt_re_query_srq, + .reg_user_mr = bnxt_re_reg_user_mr, + .req_notify_cq = bnxt_re_req_notify_cq, +}; + static int bnxt_re_register_ib(struct bnxt_re_dev *rdev) { struct ib_device *ibdev = &rdev->ibdev; @@ -614,60 +658,10 @@ static int bnxt_re_register_ib(struct bnxt_re_dev *rdev) (1ull << IB_USER_VERBS_CMD_DESTROY_AH); /* POLL_CQ and REQ_NOTIFY_CQ is directly handled in libbnxt_re */ - /* Kernel verbs */ - ibdev->query_device = bnxt_re_query_device; - ibdev->modify_device = bnxt_re_modify_device; - - ibdev->query_port = bnxt_re_query_port; - ibdev->get_port_immutable = bnxt_re_get_port_immutable; - ibdev->get_dev_fw_str = bnxt_re_query_fw_str; - ibdev->query_pkey = bnxt_re_query_pkey; - ibdev->get_netdev = bnxt_re_get_netdev; - ibdev->add_gid = bnxt_re_add_gid; - ibdev->del_gid = bnxt_re_del_gid; - ibdev->get_link_layer = bnxt_re_get_link_layer; - - ibdev->alloc_pd = bnxt_re_alloc_pd; - ibdev->dealloc_pd = bnxt_re_dealloc_pd; - - ibdev->create_ah = bnxt_re_create_ah; - ibdev->modify_ah = bnxt_re_modify_ah; - ibdev->query_ah = bnxt_re_query_ah; - ibdev->destroy_ah = bnxt_re_destroy_ah; - - ibdev->create_srq = bnxt_re_create_srq; - ibdev->modify_srq = bnxt_re_modify_srq; - ibdev->query_srq = bnxt_re_query_srq; - ibdev->destroy_srq = bnxt_re_destroy_srq; - ibdev->post_srq_recv = bnxt_re_post_srq_recv; - - ibdev->create_qp = bnxt_re_create_qp; - ibdev->modify_qp = bnxt_re_modify_qp; - ibdev->query_qp = bnxt_re_query_qp; - ibdev->destroy_qp = bnxt_re_destroy_qp; - - ibdev->post_send = bnxt_re_post_send; - ibdev->post_recv = bnxt_re_post_recv; - - ibdev->create_cq = bnxt_re_create_cq; - 
ibdev->destroy_cq = bnxt_re_destroy_cq;
-	ibdev->poll_cq = bnxt_re_poll_cq;
-	ibdev->req_notify_cq = bnxt_re_req_notify_cq;
-
-	ibdev->get_dma_mr = bnxt_re_get_dma_mr;
-	ibdev->dereg_mr = bnxt_re_dereg_mr;
-	ibdev->alloc_mr = bnxt_re_alloc_mr;
-	ibdev->map_mr_sg = bnxt_re_map_mr_sg;
-
-	ibdev->reg_user_mr = bnxt_re_reg_user_mr;
-	ibdev->alloc_ucontext = bnxt_re_alloc_ucontext;
-	ibdev->dealloc_ucontext = bnxt_re_dealloc_ucontext;
-	ibdev->mmap = bnxt_re_mmap;
-	ibdev->get_hw_stats = bnxt_re_ib_get_hw_stats;
-	ibdev->alloc_hw_stats = bnxt_re_ib_alloc_hw_stats;

 	rdma_set_device_sysfs_group(ibdev, &bnxt_re_dev_attr_group);
 	ibdev->driver_id = RDMA_DRIVER_BNXT_RE;
+	ib_set_device_ops(ibdev, &bnxt_re_dev_ops);
 	return ib_register_device(ibdev, "bnxt_re%d", NULL);
 }
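[Editor's note: a small standalone illustration of why the per-driver const table used
in this conversion (and in the driver conversions that follow) is attractive; it is not
kernel code and the names (example_ops, exdrv_*) are invented. Designated initializers
are checked against the member names and types, members a driver does not list stay
NULL (so "unsupported" is simply a NULL pointer), and one read-only table can serve
every device instance the driver creates.]

#include <stdio.h>

struct example_ops {
	int (*query_device)(void *dev);
	int (*resize_cq)(void *dev, int cqe);
};

static int exdrv_query_device(void *dev)
{
	(void)dev;
	return 0;
}

/* One const table for the whole driver; it can live in read-only memory. */
static const struct example_ops exdrv_ops = {
	.query_device = exdrv_query_device,
	/* .resize_cq not listed: this driver does not support it */
	/* .resize_cp = ... would be rejected by the compiler (no such member) */
};

int main(void)
{
	/* Two device instances sharing the same table. */
	const struct example_ops *dev0 = &exdrv_ops;
	const struct example_ops *dev1 = &exdrv_ops;

	printf("dev0 resize_cq supported: %s\n", dev0->resize_cq ? "yes" : "no");
	printf("dev1 query_device -> %d\n", dev1->query_device(NULL));
	return 0;
}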
From patchwork Mon Dec 10 19:09:32 2018
X-Patchwork-Submitter: Kamal Heib
X-Patchwork-Id: 10722257
X-Patchwork-Delegate: jgg@ziepe.ca
From: Kamal Heib
To: Doug Ledford, Jason Gunthorpe
Cc: linux-rdma@vger.kernel.org, kamalheib1@gmail.com
Subject: [PATCH rdma-next v5 04/20] RDMA/cxgb3: Initialize ib_device_ops struct
Date: Mon, 10 Dec 2018 21:09:32 +0200
Message-Id: <20181210190948.6892-5-kamalheib1@gmail.com>
In-Reply-To: <20181210190948.6892-1-kamalheib1@gmail.com>
References: <20181210190948.6892-1-kamalheib1@gmail.com>

Initialize ib_device_ops with the supported operations using
ib_set_device_ops().

Signed-off-by: Kamal Heib
---
 drivers/infiniband/hw/cxgb3/iwch_provider.c | 64 +++++++++++----------
 1 file changed, 34 insertions(+), 30 deletions(-)

diff --git a/drivers/infiniband/hw/cxgb3/iwch_provider.c b/drivers/infiniband/hw/cxgb3/iwch_provider.c
index ebbec02cebe0..7a1dc83ba588 100644
--- a/drivers/infiniband/hw/cxgb3/iwch_provider.c
+++ b/drivers/infiniband/hw/cxgb3/iwch_provider.c
@@ -1317,6 +1317,39 @@ static void get_dev_fw_ver_str(struct ib_device *ibdev, char *str)
 	snprintf(str, IB_FW_VERSION_NAME_MAX, "%s", info.fw_version);
 }

+static const struct ib_device_ops iwch_dev_ops = {
+	.alloc_hw_stats = iwch_alloc_stats,
+	.alloc_mr = iwch_alloc_mr,
+	.alloc_mw = iwch_alloc_mw,
+	.alloc_pd = iwch_allocate_pd,
+	.alloc_ucontext = iwch_alloc_ucontext,
+	.create_cq = iwch_create_cq,
+	.create_qp = iwch_create_qp,
+	.dealloc_mw = iwch_dealloc_mw,
+	.dealloc_pd = iwch_deallocate_pd,
+	.dealloc_ucontext = iwch_dealloc_ucontext,
+	.dereg_mr = iwch_dereg_mr,
+	.destroy_cq = iwch_destroy_cq,
+	.destroy_qp = iwch_destroy_qp,
+	.get_dev_fw_str = get_dev_fw_ver_str,
+	.get_dma_mr = iwch_get_dma_mr,
+	.get_hw_stats = iwch_get_mib,
+	.get_port_immutable = iwch_port_immutable,
+	.map_mr_sg = iwch_map_mr_sg,
+	.mmap = iwch_mmap,
+	.modify_qp = iwch_ib_modify_qp,
+	.poll_cq = iwch_poll_cq,
+	.post_recv = iwch_post_receive,
+	.post_send = iwch_post_send,
+	.query_device = iwch_query_device,
+	.query_gid = iwch_query_gid,
+	.query_pkey = iwch_query_pkey,
+	.query_port = iwch_query_port,
+	.reg_user_mr = iwch_reg_user_mr,
+	.req_notify_cq = iwch_arm_cq,
+	.resize_cq = iwch_resize_cq,
+};
+
 int iwch_register_device(struct iwch_dev *dev)
 {
 	int ret;
@@ -1356,37 +1389,7 @@ int iwch_register_device(struct iwch_dev *dev)
 	dev->ibdev.phys_port_cnt = dev->rdev.port_info.nports;
 	dev->ibdev.num_comp_vectors = 1;
 	dev->ibdev.dev.parent = &dev->rdev.rnic_info.pdev->dev;
-	dev->ibdev.query_device = iwch_query_device;
-	dev->ibdev.query_port = iwch_query_port;
-	dev->ibdev.query_pkey = iwch_query_pkey;
-	dev->ibdev.query_gid = iwch_query_gid;
-	dev->ibdev.alloc_ucontext = iwch_alloc_ucontext;
-	dev->ibdev.dealloc_ucontext = iwch_dealloc_ucontext;
-	dev->ibdev.mmap = iwch_mmap;
-	dev->ibdev.alloc_pd = iwch_allocate_pd;
-	dev->ibdev.dealloc_pd = iwch_deallocate_pd;
-	dev->ibdev.create_qp = iwch_create_qp;
-	dev->ibdev.modify_qp = iwch_ib_modify_qp;
-	dev->ibdev.destroy_qp = iwch_destroy_qp;
-	dev->ibdev.create_cq = iwch_create_cq;
-	dev->ibdev.destroy_cq = iwch_destroy_cq;
-	dev->ibdev.resize_cq = iwch_resize_cq;
-	dev->ibdev.poll_cq = iwch_poll_cq;
-	dev->ibdev.get_dma_mr = iwch_get_dma_mr;
-	dev->ibdev.reg_user_mr = iwch_reg_user_mr;
-	dev->ibdev.dereg_mr = iwch_dereg_mr;
-	dev->ibdev.alloc_mw = iwch_alloc_mw;
-	dev->ibdev.dealloc_mw = iwch_dealloc_mw;
-	dev->ibdev.alloc_mr = iwch_alloc_mr;
-	dev->ibdev.map_mr_sg = iwch_map_mr_sg;
-	dev->ibdev.req_notify_cq = iwch_arm_cq;
-	dev->ibdev.post_send = iwch_post_send;
-	dev->ibdev.post_recv = iwch_post_receive;
-	dev->ibdev.alloc_hw_stats = iwch_alloc_stats;
-	dev->ibdev.get_hw_stats = iwch_get_mib;
 	dev->ibdev.uverbs_abi_ver = IWCH_UVERBS_ABI_VERSION;
-	dev->ibdev.get_port_immutable = iwch_port_immutable;
-	dev->ibdev.get_dev_fw_str = get_dev_fw_ver_str;

 	dev->ibdev.iwcm = kmalloc(sizeof(struct iw_cm_verbs), GFP_KERNEL);
 	if (!dev->ibdev.iwcm)
@@ -1405,6 +1408,7 @@ int iwch_register_device(struct iwch_dev *dev)
 	dev->ibdev.driver_id = RDMA_DRIVER_CXGB3;
 	rdma_set_device_sysfs_group(&dev->ibdev, &iwch_attr_group);
+	ib_set_device_ops(&dev->ibdev, &iwch_dev_ops);
 	ret = ib_register_device(&dev->ibdev, "cxgb3_%d", NULL);
 	if (ret)
 		kfree(dev->ibdev.iwcm);
From patchwork Mon Dec 10 19:09:33 2018
X-Patchwork-Submitter: Kamal Heib
X-Patchwork-Id: 10722259
X-Patchwork-Delegate: jgg@ziepe.ca
From: Kamal Heib
To: Doug Ledford, Jason Gunthorpe
Cc: linux-rdma@vger.kernel.org, kamalheib1@gmail.com
Subject: [PATCH rdma-next v5 05/20] RDMA/cxgb4: Initialize ib_device_ops struct
Date: Mon, 10 Dec 2018 21:09:33 +0200
Message-Id: <20181210190948.6892-6-kamalheib1@gmail.com>
In-Reply-To: <20181210190948.6892-1-kamalheib1@gmail.com>
References: <20181210190948.6892-1-kamalheib1@gmail.com>

Initialize ib_device_ops with the supported operations using
ib_set_device_ops().
Signed-off-by: Kamal Heib --- drivers/infiniband/hw/cxgb4/provider.c | 74 ++++++++++++++------------ 1 file changed, 39 insertions(+), 35 deletions(-) diff --git a/drivers/infiniband/hw/cxgb4/provider.c b/drivers/infiniband/hw/cxgb4/provider.c index cbb3c0ddd990..586b0c37481f 100644 --- a/drivers/infiniband/hw/cxgb4/provider.c +++ b/drivers/infiniband/hw/cxgb4/provider.c @@ -531,6 +531,44 @@ static int fill_res_entry(struct sk_buff *msg, struct rdma_restrack_entry *res) c4iw_restrack_funcs[res->type](msg, res) : 0; } +static const struct ib_device_ops c4iw_dev_ops = { + .alloc_hw_stats = c4iw_alloc_stats, + .alloc_mr = c4iw_alloc_mr, + .alloc_mw = c4iw_alloc_mw, + .alloc_pd = c4iw_allocate_pd, + .alloc_ucontext = c4iw_alloc_ucontext, + .create_cq = c4iw_create_cq, + .create_qp = c4iw_create_qp, + .create_srq = c4iw_create_srq, + .dealloc_mw = c4iw_dealloc_mw, + .dealloc_pd = c4iw_deallocate_pd, + .dealloc_ucontext = c4iw_dealloc_ucontext, + .dereg_mr = c4iw_dereg_mr, + .destroy_cq = c4iw_destroy_cq, + .destroy_qp = c4iw_destroy_qp, + .destroy_srq = c4iw_destroy_srq, + .get_dev_fw_str = get_dev_fw_str, + .get_dma_mr = c4iw_get_dma_mr, + .get_hw_stats = c4iw_get_mib, + .get_netdev = get_netdev, + .get_port_immutable = c4iw_port_immutable, + .map_mr_sg = c4iw_map_mr_sg, + .mmap = c4iw_mmap, + .modify_qp = c4iw_ib_modify_qp, + .modify_srq = c4iw_modify_srq, + .poll_cq = c4iw_poll_cq, + .post_recv = c4iw_post_receive, + .post_send = c4iw_post_send, + .post_srq_recv = c4iw_post_srq_recv, + .query_device = c4iw_query_device, + .query_gid = c4iw_query_gid, + .query_pkey = c4iw_query_pkey, + .query_port = c4iw_query_port, + .query_qp = c4iw_ib_query_qp, + .reg_user_mr = c4iw_reg_user_mr, + .req_notify_cq = c4iw_arm_cq, +}; + void c4iw_register_device(struct work_struct *work) { int ret; @@ -573,42 +611,7 @@ void c4iw_register_device(struct work_struct *work) dev->ibdev.phys_port_cnt = dev->rdev.lldi.nports; dev->ibdev.num_comp_vectors = dev->rdev.lldi.nciq; dev->ibdev.dev.parent = &dev->rdev.lldi.pdev->dev; - dev->ibdev.query_device = c4iw_query_device; - dev->ibdev.query_port = c4iw_query_port; - dev->ibdev.query_pkey = c4iw_query_pkey; - dev->ibdev.query_gid = c4iw_query_gid; - dev->ibdev.alloc_ucontext = c4iw_alloc_ucontext; - dev->ibdev.dealloc_ucontext = c4iw_dealloc_ucontext; - dev->ibdev.mmap = c4iw_mmap; - dev->ibdev.alloc_pd = c4iw_allocate_pd; - dev->ibdev.dealloc_pd = c4iw_deallocate_pd; - dev->ibdev.create_qp = c4iw_create_qp; - dev->ibdev.modify_qp = c4iw_ib_modify_qp; - dev->ibdev.query_qp = c4iw_ib_query_qp; - dev->ibdev.destroy_qp = c4iw_destroy_qp; - dev->ibdev.create_srq = c4iw_create_srq; - dev->ibdev.modify_srq = c4iw_modify_srq; - dev->ibdev.destroy_srq = c4iw_destroy_srq; - dev->ibdev.create_cq = c4iw_create_cq; - dev->ibdev.destroy_cq = c4iw_destroy_cq; - dev->ibdev.poll_cq = c4iw_poll_cq; - dev->ibdev.get_dma_mr = c4iw_get_dma_mr; - dev->ibdev.reg_user_mr = c4iw_reg_user_mr; - dev->ibdev.dereg_mr = c4iw_dereg_mr; - dev->ibdev.alloc_mw = c4iw_alloc_mw; - dev->ibdev.dealloc_mw = c4iw_dealloc_mw; - dev->ibdev.alloc_mr = c4iw_alloc_mr; - dev->ibdev.map_mr_sg = c4iw_map_mr_sg; - dev->ibdev.req_notify_cq = c4iw_arm_cq; - dev->ibdev.post_send = c4iw_post_send; - dev->ibdev.post_recv = c4iw_post_receive; - dev->ibdev.post_srq_recv = c4iw_post_srq_recv; - dev->ibdev.alloc_hw_stats = c4iw_alloc_stats; - dev->ibdev.get_hw_stats = c4iw_get_mib; dev->ibdev.uverbs_abi_ver = C4IW_UVERBS_ABI_VERSION; - dev->ibdev.get_port_immutable = c4iw_port_immutable; - dev->ibdev.get_dev_fw_str = 
get_dev_fw_str;
-	dev->ibdev.get_netdev = get_netdev;

 	dev->ibdev.iwcm = kmalloc(sizeof(struct iw_cm_verbs), GFP_KERNEL);
 	if (!dev->ibdev.iwcm) {
@@ -630,6 +633,7 @@ void c4iw_register_device(struct work_struct *work)

 	rdma_set_device_sysfs_group(&dev->ibdev, &c4iw_attr_group);
 	dev->ibdev.driver_id = RDMA_DRIVER_CXGB4;
+	ib_set_device_ops(&dev->ibdev, &c4iw_dev_ops);
 	ret = ib_register_device(&dev->ibdev, "cxgb4_%d", NULL);
 	if (ret)
 		goto err_kfree_iwcm;
From patchwork Mon Dec 10 19:09:34 2018
X-Patchwork-Submitter: Kamal Heib
X-Patchwork-Id: 10722261
X-Patchwork-Delegate: jgg@ziepe.ca
From: Kamal Heib
To: Doug Ledford, Jason Gunthorpe
Cc: linux-rdma@vger.kernel.org, kamalheib1@gmail.com
Subject: [PATCH rdma-next v5 06/20] RDMA/hfi1: Initialize ib_device_ops struct
Date: Mon, 10 Dec 2018 21:09:34 +0200
Message-Id: <20181210190948.6892-7-kamalheib1@gmail.com>
In-Reply-To: <20181210190948.6892-1-kamalheib1@gmail.com>
References: <20181210190948.6892-1-kamalheib1@gmail.com>

Initialize ib_device_ops with the supported operations using
ib_set_device_ops().

Signed-off-by: Kamal Heib
---
 drivers/infiniband/hw/hfi1/verbs.c | 18 +++++++++++-------
 1 file changed, 11 insertions(+), 7 deletions(-)

diff --git a/drivers/infiniband/hw/hfi1/verbs.c b/drivers/infiniband/hw/hfi1/verbs.c
index 48e11e510358..b861bc1d5414 100644
--- a/drivers/infiniband/hw/hfi1/verbs.c
+++ b/drivers/infiniband/hw/hfi1/verbs.c
@@ -1616,6 +1616,16 @@ static int get_hw_stats(struct ib_device *ibdev, struct rdma_hw_stats *stats,
 	return count;
 }

+static const struct ib_device_ops hfi1_dev_ops = {
+	.alloc_hw_stats = alloc_hw_stats,
+	.alloc_rdma_netdev = hfi1_vnic_alloc_rn,
+	.get_dev_fw_str = hfi1_get_dev_fw_str,
+	.get_hw_stats = get_hw_stats,
+	.modify_device = modify_device,
+	/* keep process mad in the driver */
+	.process_mad = hfi1_process_mad,
+};
+
 /**
  * hfi1_register_ib_device - register our device with the infiniband core
  * @dd: the device data structure
@@ -1659,14 +1669,8 @@ int hfi1_register_ib_device(struct hfi1_devdata *dd)
 	ibdev->owner = THIS_MODULE;
 	ibdev->phys_port_cnt = dd->num_pports;
 	ibdev->dev.parent = &dd->pcidev->dev;
-	ibdev->modify_device = modify_device;
-	ibdev->alloc_hw_stats = alloc_hw_stats;
-	ibdev->get_hw_stats = get_hw_stats;
-	ibdev->alloc_rdma_netdev = hfi1_vnic_alloc_rn;
-	/* keep process mad in the driver */
-	ibdev->process_mad = hfi1_process_mad;
-	ibdev->get_dev_fw_str = hfi1_get_dev_fw_str;
+	ib_set_device_ops(ibdev, &hfi1_dev_ops);
 	strlcpy(ibdev->node_desc, init_utsname()->nodename,
 		sizeof(ibdev->node_desc));
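[Editor's note: a standalone model, with invented names, of how several ops tables can
be layered onto one device when the setter only fills slots that are still empty. This
is not kernel code and does not claim the exact call order the kernel uses; it simply
mirrors the intent behind drivers that set only a few ops of their own (hfi1 above) or
that keep separate hardware-revision and SRQ tables (the hns patch below): a table
applied first keeps priority, and a later table only fills the gaps.]

#include <stdio.h>

struct layered_ops {
	int (*query_device)(void *dev);
	int (*post_send)(void *dev);
};

#define LAYER_SET_OP(dst, src, name)				\
	do {							\
		if ((src)->name && !(dst)->name)		\
			(dst)->name = (src)->name;		\
	} while (0)

static void layer_set_ops(struct layered_ops *dst, const struct layered_ops *src)
{
	LAYER_SET_OP(dst, src, query_device);
	LAYER_SET_OP(dst, src, post_send);
}

static int drv_post_send(void *dev)    { (void)dev; return 1; } /* driver's own */
static int lib_post_send(void *dev)    { (void)dev; return 2; } /* common fallback */
static int lib_query_device(void *dev) { (void)dev; return 3; }

static const struct layered_ops driver_ops = { .post_send = drv_post_send };
static const struct layered_ops common_ops = {
	.query_device = lib_query_device,
	.post_send = lib_post_send,
};

int main(void)
{
	struct layered_ops dev_ops = { 0 };

	layer_set_ops(&dev_ops, &driver_ops);	/* driver-specific table first */
	layer_set_ops(&dev_ops, &common_ops);	/* common table fills the gaps */

	/* post_send stays the driver's; query_device comes from the common table */
	printf("post_send=%d query_device=%d\n",
	       dev_ops.post_send(NULL), dev_ops.query_device(NULL));
	return 0;
}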
RCVD_IN_SORBS_WEB autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 1ACFD2A779 for ; Mon, 10 Dec 2018 19:10:31 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729244AbeLJTKb (ORCPT ); Mon, 10 Dec 2018 14:10:31 -0500 Received: from mail-wr1-f66.google.com ([209.85.221.66]:40927 "EHLO mail-wr1-f66.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727663AbeLJTKa (ORCPT ); Mon, 10 Dec 2018 14:10:30 -0500 Received: by mail-wr1-f66.google.com with SMTP id p4so11631669wrt.7 for ; Mon, 10 Dec 2018 11:10:28 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=nACwc6EfRfL19SnL3eIEA7nhT3MiudOlEL5PWnmM0Ik=; b=EQHGDnuyxu2lpvN9WM3VV4AANZzr4KVaPgF6LnokRXPBfOmXrDejg1e6QhK81ZZBpq TYK+EAiQ2xa1NCBg9RZ0c49SWKGwBHPp2hjJaupNOAOcPDhKA8hR54AhXcO6n7hH/cF4 W53hT+4kx839+T6NO8jv3D6EtOSwa9FrPstSK2FUwz+IJV2dQR0i3MXjRSipm8QspJ6I KtbFmrPqahLpIbpBz7qhapOJXEEcpAEu1mmtmsU8hMnL2ta8y7pJJtu6eU6CCMsUD2So 9uHz1v+R9Ker13p8E7U4FNkZjfhptdskSCGbc6S4gp30ERl1lMta2cgqdBw6+lDydSOI wjWA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=nACwc6EfRfL19SnL3eIEA7nhT3MiudOlEL5PWnmM0Ik=; b=PdfVxWR/dgFLEafPZvwCxoiEgXNOki7OqM2YbBfseHbYOGVz3Jsf0ZLOWzNtTfhVWs NAM8U5FF0YCjmdwX7RWEIPSxRHFb6JGRtChkK3EDbIguu2E5ogK9OmkmVpuz+rawGjF0 I5IddFGIN7LPxiiNfKTViVL4pyZsLKTcs6s8MmC4Ebz+NzzgwyOrZ/olR5SWferC47wi fF2i+WkrJFDwd/ZurGHcVZNxISyDmj/ZtUwayQ1CaSW9ShbATopJWyiHy4zO/3J+XW4b 56UX6GtHm8NMNWrafjnFqTms15i8Jwg5OCGPfVpqJ7k4sRGS1nbV9QTaawrldcUINSwu xWkw== X-Gm-Message-State: AA+aEWZ2jzUTI0PXNQ8fhBiOytNq9vRicChIaXwpZtUYlJidcsF9KmON wrjPK4NNv/fiY5tNfPVik3M= X-Google-Smtp-Source: AFSGD/XPzNcGWXGx26qYoPOTZlLNPuMh5Y8bhrigNPXp61xF8CMGN/lGsef6h1kdM2kS024FrKSJlg== X-Received: by 2002:adf:e78f:: with SMTP id n15mr11309682wrm.115.1544469027392; Mon, 10 Dec 2018 11:10:27 -0800 (PST) Received: from kheib-workstation.redhat.com (bzq-109-64-38-215.red.bezeqint.net. [109.64.38.215]) by smtp.gmail.com with ESMTPSA id o5sm33847657wmg.25.2018.12.10.11.10.26 (version=TLS1_2 cipher=ECDHE-RSA-CHACHA20-POLY1305 bits=256/256); Mon, 10 Dec 2018 11:10:26 -0800 (PST) From: Kamal Heib To: Doug Ledford , Jason Gunthorpe Cc: linux-rdma@vger.kernel.org, kamalheib1@gmail.com Subject: [PATCH rdma-next v5 07/20] RDMA/hns: Initialize ib_device_ops struct Date: Mon, 10 Dec 2018 21:09:35 +0200 Message-Id: <20181210190948.6892-8-kamalheib1@gmail.com> X-Mailer: git-send-email 2.19.2 In-Reply-To: <20181210190948.6892-1-kamalheib1@gmail.com> References: <20181210190948.6892-1-kamalheib1@gmail.com> MIME-Version: 1.0 Sender: linux-rdma-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-rdma@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP Initialize ib_device_ops with the supported operations using ib_set_device_ops(). 
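For readers skimming the series: the core helper used by all of these conversions behaves roughly as sketched below. It takes a driver-provided const table and copies the callbacks the driver filled in into the ib_device, so a driver can call it several times with partial tables, as the hns patch below does with a common table, per-hardware-version tables and capability-specific tables. This is only a sketch under stated assumptions, not the helper introduced earlier in the series; the function name is made up, only three callbacks are shown, and the sketch assumes a callback is copied only when the target slot is still empty.

#include <rdma/ib_verbs.h>

/*
 * Sketch only: merge the non-NULL callbacks of a const ops table into
 * an ib_device.  The real ib_set_device_ops() covers every member of
 * struct ib_device_ops, typically generated by a macro.
 */
static void example_set_device_ops(struct ib_device *dev,
				   const struct ib_device_ops *ops)
{
	if (ops->create_qp && !dev->create_qp)
		dev->create_qp = ops->create_qp;
	if (ops->post_send && !dev->post_send)
		dev->post_send = ops->post_send;
	if (ops->poll_cq && !dev->poll_cq)
		dev->poll_cq = ops->poll_cq;
	/* ...one such copy per callback in struct ib_device_ops. */
}
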
Signed-off-by: Kamal Heib --- drivers/infiniband/hw/hns/hns_roce_device.h | 2 + drivers/infiniband/hw/hns/hns_roce_hw_v1.c | 11 ++ drivers/infiniband/hw/hns/hns_roce_hw_v2.c | 18 ++++ drivers/infiniband/hw/hns/hns_roce_main.c | 114 ++++++++++---------- 4 files changed, 87 insertions(+), 58 deletions(-) diff --git a/drivers/infiniband/hw/hns/hns_roce_device.h b/drivers/infiniband/hw/hns/hns_roce_device.h index 779dd4c409cb..67609cc6a45e 100644 --- a/drivers/infiniband/hw/hns/hns_roce_device.h +++ b/drivers/infiniband/hw/hns/hns_roce_device.h @@ -883,6 +883,8 @@ struct hns_roce_hw { int (*query_srq)(struct ib_srq *ibsrq, struct ib_srq_attr *attr); int (*post_srq_recv)(struct ib_srq *ibsrq, const struct ib_recv_wr *wr, const struct ib_recv_wr **bad_wr); + const struct ib_device_ops *hns_roce_dev_ops; + const struct ib_device_ops *hns_roce_dev_srq_ops; }; struct hns_roce_dev { diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v1.c b/drivers/infiniband/hw/hns/hns_roce_hw_v1.c index ca05810c92dc..d17a7ce3c93a 100644 --- a/drivers/infiniband/hw/hns/hns_roce_hw_v1.c +++ b/drivers/infiniband/hw/hns/hns_roce_hw_v1.c @@ -4793,6 +4793,16 @@ static void hns_roce_v1_cleanup_eq_table(struct hns_roce_dev *hr_dev) kfree(eq_table->eq); } +static const struct ib_device_ops hns_roce_v1_dev_ops = { + .destroy_qp = hns_roce_v1_destroy_qp, + .modify_cq = hns_roce_v1_modify_cq, + .poll_cq = hns_roce_v1_poll_cq, + .post_recv = hns_roce_v1_post_recv, + .post_send = hns_roce_v1_post_send, + .query_qp = hns_roce_v1_query_qp, + .req_notify_cq = hns_roce_v1_req_notify_cq, +}; + static const struct hns_roce_hw hns_roce_hw_v1 = { .reset = hns_roce_v1_reset, .hw_profile = hns_roce_v1_profile, @@ -4818,6 +4828,7 @@ static const struct hns_roce_hw hns_roce_hw_v1 = { .destroy_cq = hns_roce_v1_destroy_cq, .init_eq = hns_roce_v1_init_eq_table, .cleanup_eq = hns_roce_v1_cleanup_eq_table, + .hns_roce_dev_ops = &hns_roce_v1_dev_ops, }; static const struct of_device_id hns_roce_of_match[] = { diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c index 835b78371294..45815bd703b7 100644 --- a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c +++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c @@ -5687,6 +5687,22 @@ static int hns_roce_v2_post_srq_recv(struct ib_srq *ibsrq, return ret; } +static const struct ib_device_ops hns_roce_v2_dev_ops = { + .destroy_qp = hns_roce_v2_destroy_qp, + .modify_cq = hns_roce_v2_modify_cq, + .poll_cq = hns_roce_v2_poll_cq, + .post_recv = hns_roce_v2_post_recv, + .post_send = hns_roce_v2_post_send, + .query_qp = hns_roce_v2_query_qp, + .req_notify_cq = hns_roce_v2_req_notify_cq, +}; + +static const struct ib_device_ops hns_roce_v2_dev_srq_ops = { + .modify_srq = hns_roce_v2_modify_srq, + .post_srq_recv = hns_roce_v2_post_srq_recv, + .query_srq = hns_roce_v2_query_srq, +}; + static const struct hns_roce_hw hns_roce_hw_v2 = { .cmq_init = hns_roce_v2_cmq_init, .cmq_exit = hns_roce_v2_cmq_exit, @@ -5718,6 +5734,8 @@ static const struct hns_roce_hw hns_roce_hw_v2 = { .modify_srq = hns_roce_v2_modify_srq, .query_srq = hns_roce_v2_query_srq, .post_srq_recv = hns_roce_v2_post_srq_recv, + .hns_roce_dev_ops = &hns_roce_v2_dev_ops, + .hns_roce_dev_srq_ops = &hns_roce_v2_dev_srq_ops, }; static const struct pci_device_id hns_roce_hw_v2_pci_tbl[] = { diff --git a/drivers/infiniband/hw/hns/hns_roce_main.c b/drivers/infiniband/hw/hns/hns_roce_main.c index 65ba43cee810..c79054ba9495 100644 --- a/drivers/infiniband/hw/hns/hns_roce_main.c +++ 
b/drivers/infiniband/hw/hns/hns_roce_main.c @@ -445,6 +445,54 @@ static void hns_roce_unregister_device(struct hns_roce_dev *hr_dev) ib_unregister_device(&hr_dev->ib_dev); } +static const struct ib_device_ops hns_roce_dev_ops = { + .add_gid = hns_roce_add_gid, + .alloc_pd = hns_roce_alloc_pd, + .alloc_ucontext = hns_roce_alloc_ucontext, + .create_ah = hns_roce_create_ah, + .create_cq = hns_roce_ib_create_cq, + .create_qp = hns_roce_create_qp, + .dealloc_pd = hns_roce_dealloc_pd, + .dealloc_ucontext = hns_roce_dealloc_ucontext, + .del_gid = hns_roce_del_gid, + .dereg_mr = hns_roce_dereg_mr, + .destroy_ah = hns_roce_destroy_ah, + .destroy_cq = hns_roce_ib_destroy_cq, + .disassociate_ucontext = hns_roce_disassociate_ucontext, + .get_dma_mr = hns_roce_get_dma_mr, + .get_link_layer = hns_roce_get_link_layer, + .get_netdev = hns_roce_get_netdev, + .get_port_immutable = hns_roce_port_immutable, + .mmap = hns_roce_mmap, + .modify_device = hns_roce_modify_device, + .modify_port = hns_roce_modify_port, + .modify_qp = hns_roce_modify_qp, + .query_ah = hns_roce_query_ah, + .query_device = hns_roce_query_device, + .query_pkey = hns_roce_query_pkey, + .query_port = hns_roce_query_port, + .reg_user_mr = hns_roce_reg_user_mr, +}; + +static const struct ib_device_ops hns_roce_dev_mr_ops = { + .rereg_user_mr = hns_roce_rereg_user_mr, +}; + +static const struct ib_device_ops hns_roce_dev_mw_ops = { + .alloc_mw = hns_roce_alloc_mw, + .dealloc_mw = hns_roce_dealloc_mw, +}; + +static const struct ib_device_ops hns_roce_dev_frmr_ops = { + .alloc_mr = hns_roce_alloc_mr, + .map_mr_sg = hns_roce_map_mr_sg, +}; + +static const struct ib_device_ops hns_roce_dev_srq_ops = { + .create_srq = hns_roce_create_srq, + .destroy_srq = hns_roce_destroy_srq, +}; + static int hns_roce_register_device(struct hns_roce_dev *hr_dev) { int ret; @@ -484,88 +532,38 @@ static int hns_roce_register_device(struct hns_roce_dev *hr_dev) ib_dev->uverbs_ex_cmd_mask |= (1ULL << IB_USER_VERBS_EX_CMD_MODIFY_CQ); - /* HCA||device||port */ - ib_dev->modify_device = hns_roce_modify_device; - ib_dev->query_device = hns_roce_query_device; - ib_dev->query_port = hns_roce_query_port; - ib_dev->modify_port = hns_roce_modify_port; - ib_dev->get_link_layer = hns_roce_get_link_layer; - ib_dev->get_netdev = hns_roce_get_netdev; - ib_dev->add_gid = hns_roce_add_gid; - ib_dev->del_gid = hns_roce_del_gid; - ib_dev->query_pkey = hns_roce_query_pkey; - ib_dev->alloc_ucontext = hns_roce_alloc_ucontext; - ib_dev->dealloc_ucontext = hns_roce_dealloc_ucontext; - ib_dev->mmap = hns_roce_mmap; - - /* PD */ - ib_dev->alloc_pd = hns_roce_alloc_pd; - ib_dev->dealloc_pd = hns_roce_dealloc_pd; - - /* AH */ - ib_dev->create_ah = hns_roce_create_ah; - ib_dev->query_ah = hns_roce_query_ah; - ib_dev->destroy_ah = hns_roce_destroy_ah; - - /* QP */ - ib_dev->create_qp = hns_roce_create_qp; - ib_dev->modify_qp = hns_roce_modify_qp; - ib_dev->query_qp = hr_dev->hw->query_qp; - ib_dev->destroy_qp = hr_dev->hw->destroy_qp; - ib_dev->post_send = hr_dev->hw->post_send; - ib_dev->post_recv = hr_dev->hw->post_recv; - - /* CQ */ - ib_dev->create_cq = hns_roce_ib_create_cq; - ib_dev->modify_cq = hr_dev->hw->modify_cq; - ib_dev->destroy_cq = hns_roce_ib_destroy_cq; - ib_dev->req_notify_cq = hr_dev->hw->req_notify_cq; - ib_dev->poll_cq = hr_dev->hw->poll_cq; - - /* MR */ - ib_dev->get_dma_mr = hns_roce_get_dma_mr; - ib_dev->reg_user_mr = hns_roce_reg_user_mr; - ib_dev->dereg_mr = hns_roce_dereg_mr; if (hr_dev->caps.flags & HNS_ROCE_CAP_FLAG_REREG_MR) { - ib_dev->rereg_user_mr = 
hns_roce_rereg_user_mr; ib_dev->uverbs_cmd_mask |= (1ULL << IB_USER_VERBS_CMD_REREG_MR); + ib_set_device_ops(ib_dev, &hns_roce_dev_mr_ops); } /* MW */ if (hr_dev->caps.flags & HNS_ROCE_CAP_FLAG_MW) { - ib_dev->alloc_mw = hns_roce_alloc_mw; - ib_dev->dealloc_mw = hns_roce_dealloc_mw; ib_dev->uverbs_cmd_mask |= (1ULL << IB_USER_VERBS_CMD_ALLOC_MW) | (1ULL << IB_USER_VERBS_CMD_DEALLOC_MW); + ib_set_device_ops(ib_dev, &hns_roce_dev_mw_ops); } /* FRMR */ - if (hr_dev->caps.flags & HNS_ROCE_CAP_FLAG_FRMR) { - ib_dev->alloc_mr = hns_roce_alloc_mr; - ib_dev->map_mr_sg = hns_roce_map_mr_sg; - } + if (hr_dev->caps.flags & HNS_ROCE_CAP_FLAG_FRMR) + ib_set_device_ops(ib_dev, &hns_roce_dev_frmr_ops); /* SRQ */ if (hr_dev->caps.flags & HNS_ROCE_CAP_FLAG_SRQ) { - ib_dev->create_srq = hns_roce_create_srq; - ib_dev->modify_srq = hr_dev->hw->modify_srq; - ib_dev->query_srq = hr_dev->hw->query_srq; - ib_dev->destroy_srq = hns_roce_destroy_srq; - ib_dev->post_srq_recv = hr_dev->hw->post_srq_recv; ib_dev->uverbs_cmd_mask |= (1ULL << IB_USER_VERBS_CMD_CREATE_SRQ) | (1ULL << IB_USER_VERBS_CMD_MODIFY_SRQ) | (1ULL << IB_USER_VERBS_CMD_QUERY_SRQ) | (1ULL << IB_USER_VERBS_CMD_DESTROY_SRQ) | (1ULL << IB_USER_VERBS_CMD_POST_SRQ_RECV); + ib_set_device_ops(ib_dev, &hns_roce_dev_srq_ops); + ib_set_device_ops(ib_dev, hr_dev->hw->hns_roce_dev_srq_ops); } - /* OTHERS */ - ib_dev->get_port_immutable = hns_roce_port_immutable; - ib_dev->disassociate_ucontext = hns_roce_disassociate_ucontext; - ib_dev->driver_id = RDMA_DRIVER_HNS; + ib_set_device_ops(ib_dev, hr_dev->hw->hns_roce_dev_ops); + ib_set_device_ops(ib_dev, &hns_roce_dev_ops); ret = ib_register_device(ib_dev, "hns_%d", NULL); if (ret) { dev_err(dev, "ib_register_device failed!\n"); From patchwork Mon Dec 10 19:09:36 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Kamal Heib X-Patchwork-Id: 10722265 X-Patchwork-Delegate: jgg@ziepe.ca Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 598993E9D for ; Mon, 10 Dec 2018 19:10:32 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 496002A87C for ; Mon, 10 Dec 2018 19:10:32 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 3E5E82A779; Mon, 10 Dec 2018 19:10:32 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-6.5 required=2.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,FREEMAIL_FROM,MAILING_LIST_MULTI,RCVD_IN_DNSWL_HI, RCVD_IN_SORBS_WEB autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id BF4EE2A954 for ; Mon, 10 Dec 2018 19:10:31 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729247AbeLJTKb (ORCPT ); Mon, 10 Dec 2018 14:10:31 -0500 Received: from mail-wr1-f68.google.com ([209.85.221.68]:33603 "EHLO mail-wr1-f68.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1728734AbeLJTKb (ORCPT ); Mon, 10 Dec 2018 14:10:31 -0500 Received: by mail-wr1-f68.google.com with SMTP id c14so11692894wrr.0 for ; Mon, 10 Dec 2018 11:10:29 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references 
:mime-version:content-transfer-encoding; bh=zadQaJ3pBwpmFZF+ljIRpmDFXgDnb/gWUshKVHaZPF8=; b=jdVQqUAKJa7iI1QCC+zjQ6y5hXXq5vER/wgAtj9yXk25mTNrfhR9WotLYZH7O8mAIR xBqkOkS3aYHfkAa6PbIAumlBf4YUASQ7YvV/BVAN/qFmPv8/GdIF0nJLLWwNmPN+ITZ9 m4l4uPRaY8mo4eizNK2LMAcPPnlnwDozSUfW15niyWSsZ8Ubh//Cb6VcFzWGK87irPwa 0B5qeyyo93MCg5ZB2hcA+xREZstJ1SncPlf/wSnQUf4ninJn2gxx4QCCpPJ950KjFHM2 nROl1uL1KCgr/OCMc9kIt4lhWu+WjMmqcfAeBHScj/2RWtAd9f48tyu22uO+QkOwIEpj UhBA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=zadQaJ3pBwpmFZF+ljIRpmDFXgDnb/gWUshKVHaZPF8=; b=FiXdKcB7DtLeaLYQQsr9eHR0eNUelhRjSZZQJXhuO+lvp+s8zBO/m5dwf7VQWNM4ve DIbeJqMqv2ErwvvJO7IYWgnNOSylPWWhVY7sA2MC0OYcG5ZMplHQBlKwCuLqb1mAabql mNcaLJY6IuQ7z6ibJt29kvIadoGCQL2S9ejk+Zqpn8CrcqojUkIw3wUDaxwJOQiIyA7q k3CFSn23HSWADt0BK3SygJpRifgh5ipp9nVYcXU55AVMVZyXv0LAic04vn28nbCSJX46 6wPVgnTt3qm+QFRT4MMxIvzxSS/zfJIEfOybD3+TQi1qmVnJV7w0A0JpwR2QGNrRM4u9 WaFg== X-Gm-Message-State: AA+aEWaINw4VWVm2D63Kji4xWKGGJeVeejXH9FtCJnBSBuJ/iqLPWCj7 SpZyRwQbaPlSYibSBmtGIxs= X-Google-Smtp-Source: AFSGD/X85aLkcYZGQSH01BCJWx/3lsEx3GCpgJbdhKfr8NLl+URY+Y9XZhlbFwO7wDPFoFF5SU8Rfg== X-Received: by 2002:adf:ce02:: with SMTP id p2mr11914146wrn.185.1544469028789; Mon, 10 Dec 2018 11:10:28 -0800 (PST) Received: from kheib-workstation.redhat.com (bzq-109-64-38-215.red.bezeqint.net. [109.64.38.215]) by smtp.gmail.com with ESMTPSA id o5sm33847657wmg.25.2018.12.10.11.10.27 (version=TLS1_2 cipher=ECDHE-RSA-CHACHA20-POLY1305 bits=256/256); Mon, 10 Dec 2018 11:10:28 -0800 (PST) From: Kamal Heib To: Doug Ledford , Jason Gunthorpe Cc: linux-rdma@vger.kernel.org, kamalheib1@gmail.com Subject: [PATCH rdma-next v5 08/20] RDMA/i40iw: Initialize ib_device_ops struct Date: Mon, 10 Dec 2018 21:09:36 +0200 Message-Id: <20181210190948.6892-9-kamalheib1@gmail.com> X-Mailer: git-send-email 2.19.2 In-Reply-To: <20181210190948.6892-1-kamalheib1@gmail.com> References: <20181210190948.6892-1-kamalheib1@gmail.com> MIME-Version: 1.0 Sender: linux-rdma-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-rdma@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP Initialize ib_device_ops with the supported operations using ib_set_device_ops(). 
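The i40iw conversion below is the plain single-table case. The same helper also supports the layered shape used by the hns patch above and the mlx4/mlx5 patches that follow: one always-present table plus small tables that are only merged when the hardware advertises the matching capability. A condensed, hypothetical example of that idiom; the foo_* names, FOO_CAP_SRQ flag and struct foo_dev are made up, and the callbacks are assumed to be defined elsewhere in the driver.

#include <rdma/ib_verbs.h>

static const struct ib_device_ops foo_dev_ops = {
	.create_qp    = foo_create_qp,
	.poll_cq      = foo_poll_cq,
	.post_send    = foo_post_send,
	.query_device = foo_query_device,
};

static const struct ib_device_ops foo_dev_srq_ops = {
	.create_srq    = foo_create_srq,
	.destroy_srq   = foo_destroy_srq,
	.post_srq_recv = foo_post_srq_recv,
};

static int foo_register_device(struct foo_dev *fdev)
{
	/* Callbacks every device generation supports. */
	ib_set_device_ops(&fdev->ibdev, &foo_dev_ops);

	/* SRQ verbs only when the HCA reports the capability. */
	if (fdev->caps & FOO_CAP_SRQ)
		ib_set_device_ops(&fdev->ibdev, &foo_dev_srq_ops);

	return ib_register_device(&fdev->ibdev, "foo_%d", NULL);
}
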
Signed-off-by: Kamal Heib --- drivers/infiniband/hw/i40iw/i40iw_verbs.c | 63 ++++++++++++----------- 1 file changed, 33 insertions(+), 30 deletions(-) diff --git a/drivers/infiniband/hw/i40iw/i40iw_verbs.c b/drivers/infiniband/hw/i40iw/i40iw_verbs.c index a773d1edf7fd..9e42ac2db3ca 100644 --- a/drivers/infiniband/hw/i40iw/i40iw_verbs.c +++ b/drivers/infiniband/hw/i40iw/i40iw_verbs.c @@ -2721,6 +2721,39 @@ static int i40iw_query_pkey(struct ib_device *ibdev, return 0; } +static const struct ib_device_ops i40iw_dev_ops = { + .alloc_hw_stats = i40iw_alloc_hw_stats, + .alloc_mr = i40iw_alloc_mr, + .alloc_pd = i40iw_alloc_pd, + .alloc_ucontext = i40iw_alloc_ucontext, + .create_cq = i40iw_create_cq, + .create_qp = i40iw_create_qp, + .dealloc_pd = i40iw_dealloc_pd, + .dealloc_ucontext = i40iw_dealloc_ucontext, + .dereg_mr = i40iw_dereg_mr, + .destroy_cq = i40iw_destroy_cq, + .destroy_qp = i40iw_destroy_qp, + .drain_rq = i40iw_drain_rq, + .drain_sq = i40iw_drain_sq, + .get_dev_fw_str = i40iw_get_dev_fw_str, + .get_dma_mr = i40iw_get_dma_mr, + .get_hw_stats = i40iw_get_hw_stats, + .get_port_immutable = i40iw_port_immutable, + .map_mr_sg = i40iw_map_mr_sg, + .mmap = i40iw_mmap, + .modify_qp = i40iw_modify_qp, + .poll_cq = i40iw_poll_cq, + .post_recv = i40iw_post_recv, + .post_send = i40iw_post_send, + .query_device = i40iw_query_device, + .query_gid = i40iw_query_gid, + .query_pkey = i40iw_query_pkey, + .query_port = i40iw_query_port, + .query_qp = i40iw_query_qp, + .reg_user_mr = i40iw_reg_user_mr, + .req_notify_cq = i40iw_req_notify_cq, +}; + /** * i40iw_init_rdma_device - initialization of iwarp device * @iwdev: iwarp device @@ -2767,30 +2800,6 @@ static struct i40iw_ib_device *i40iw_init_rdma_device(struct i40iw_device *iwdev iwibdev->ibdev.phys_port_cnt = 1; iwibdev->ibdev.num_comp_vectors = iwdev->ceqs_count; iwibdev->ibdev.dev.parent = &pcidev->dev; - iwibdev->ibdev.query_port = i40iw_query_port; - iwibdev->ibdev.query_pkey = i40iw_query_pkey; - iwibdev->ibdev.query_gid = i40iw_query_gid; - iwibdev->ibdev.alloc_ucontext = i40iw_alloc_ucontext; - iwibdev->ibdev.dealloc_ucontext = i40iw_dealloc_ucontext; - iwibdev->ibdev.mmap = i40iw_mmap; - iwibdev->ibdev.alloc_pd = i40iw_alloc_pd; - iwibdev->ibdev.dealloc_pd = i40iw_dealloc_pd; - iwibdev->ibdev.create_qp = i40iw_create_qp; - iwibdev->ibdev.modify_qp = i40iw_modify_qp; - iwibdev->ibdev.query_qp = i40iw_query_qp; - iwibdev->ibdev.destroy_qp = i40iw_destroy_qp; - iwibdev->ibdev.create_cq = i40iw_create_cq; - iwibdev->ibdev.destroy_cq = i40iw_destroy_cq; - iwibdev->ibdev.get_dma_mr = i40iw_get_dma_mr; - iwibdev->ibdev.reg_user_mr = i40iw_reg_user_mr; - iwibdev->ibdev.dereg_mr = i40iw_dereg_mr; - iwibdev->ibdev.alloc_hw_stats = i40iw_alloc_hw_stats; - iwibdev->ibdev.get_hw_stats = i40iw_get_hw_stats; - iwibdev->ibdev.query_device = i40iw_query_device; - iwibdev->ibdev.drain_sq = i40iw_drain_sq; - iwibdev->ibdev.drain_rq = i40iw_drain_rq; - iwibdev->ibdev.alloc_mr = i40iw_alloc_mr; - iwibdev->ibdev.map_mr_sg = i40iw_map_mr_sg; iwibdev->ibdev.iwcm = kzalloc(sizeof(*iwibdev->ibdev.iwcm), GFP_KERNEL); if (!iwibdev->ibdev.iwcm) { ib_dealloc_device(&iwibdev->ibdev); @@ -2807,12 +2816,6 @@ static struct i40iw_ib_device *i40iw_init_rdma_device(struct i40iw_device *iwdev iwibdev->ibdev.iwcm->destroy_listen = i40iw_destroy_listen; memcpy(iwibdev->ibdev.iwcm->ifname, netdev->name, sizeof(iwibdev->ibdev.iwcm->ifname)); - iwibdev->ibdev.get_port_immutable = i40iw_port_immutable; - iwibdev->ibdev.get_dev_fw_str = i40iw_get_dev_fw_str; - iwibdev->ibdev.poll_cq = 
i40iw_poll_cq; - iwibdev->ibdev.req_notify_cq = i40iw_req_notify_cq; - iwibdev->ibdev.post_send = i40iw_post_send; - iwibdev->ibdev.post_recv = i40iw_post_recv; return iwibdev; } From patchwork Mon Dec 10 19:09:37 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Kamal Heib X-Patchwork-Id: 10722267 X-Patchwork-Delegate: jgg@ziepe.ca Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 0109118A7 for ; Mon, 10 Dec 2018 19:10:34 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id E35842A6B1 for ; Mon, 10 Dec 2018 19:10:33 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id D82652A84D; Mon, 10 Dec 2018 19:10:33 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-6.5 required=2.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,FREEMAIL_FROM,MAILING_LIST_MULTI,RCVD_IN_DNSWL_HI, RCVD_IN_SORBS_WEB autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 20F5D2A7BC for ; Mon, 10 Dec 2018 19:10:33 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1728828AbeLJTKd (ORCPT ); Mon, 10 Dec 2018 14:10:33 -0500 Received: from mail-wr1-f65.google.com ([209.85.221.65]:41522 "EHLO mail-wr1-f65.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727663AbeLJTKc (ORCPT ); Mon, 10 Dec 2018 14:10:32 -0500 Received: by mail-wr1-f65.google.com with SMTP id x10so11630110wrs.8 for ; Mon, 10 Dec 2018 11:10:31 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=y5ydL3S91MMyoEGcLIcEKfc+S5wgUWwR1MIAs9i3A2o=; b=HIXNgk8IO5W6CK/VvjWiNJHm0MEQZlvnFtKdGsg5ndYGNrrj2mSXKlEiO0l1X4zbtW QCMLAbBTjt2b1srQ/tE96ghzuLqY6Tss9aGlylxPDbnMaxm+dbF9QEJNTisMCnh5Z6Fw 020MqK/nFt8US8fo9Y1vfaek6grpOrO5T7jfcUkzqhXmkKwR0XNaFOYOJsrssiUc1L3X +m7upLt1h2ktbDEhoBCKT6Yq7wHjbtM1AOF2kSN7eCGbyC/xBFDqLO3xDDGgX6APfM+l IBsXj8jIhpA5/tiO8BfY4XiqvpZB2ahgKVxx758vgRRSqA5MkT18UM+4ht2rYCiNcLbV E3kQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=y5ydL3S91MMyoEGcLIcEKfc+S5wgUWwR1MIAs9i3A2o=; b=LpG35kAS4XsSLGTmWkwtpjngRnjhRQoZWOYYg5cwI77ofuf0dsi61Xk68bO0oPFvm7 rEDVknn1L6m+ATHlWcz4NGvDBUGSsWZNEnOO99e2Ih57NAdjAM2YKzS/0QCuFd8yZe+z V8exoDUn7w0w1mgyOiiiViEO2uUKJyaShTqcq4EVgM6ZoY29p9m7++Mz44+ld7IHEdyR NpB2i3jfuHM0NGqH0bt017++V9wOzMFfSU4Rt1WauoVJZLNmFbyd7rTXCB2xqolyPjk2 v644TrTQCVIaiwqNuTY6bcKow+VCT6eDZCfSeDqCBQShRtR4M1/CDViMKGWuoZSmdr3G ALkg== X-Gm-Message-State: AA+aEWbGYoC8cRz6AAEnO07r2hZZX3V8UhPc8tcae3b1PwGEu61qIIpT Zh6G1OG6CDfh7yG5/F3OhcU= X-Google-Smtp-Source: AFSGD/WBVChsLSmkHhDphwBdAn/trs5C7gkFtXVxC6hkG9b1nlRfS8VC0+bx5rN0ZbhxKZ31x2KEpg== X-Received: by 2002:adf:f504:: with SMTP id q4mr11613178wro.321.1544469030286; Mon, 10 Dec 2018 11:10:30 -0800 (PST) Received: from kheib-workstation.redhat.com (bzq-109-64-38-215.red.bezeqint.net. 
[109.64.38.215]) by smtp.gmail.com with ESMTPSA id o5sm33847657wmg.25.2018.12.10.11.10.28 (version=TLS1_2 cipher=ECDHE-RSA-CHACHA20-POLY1305 bits=256/256); Mon, 10 Dec 2018 11:10:29 -0800 (PST) From: Kamal Heib To: Doug Ledford , Jason Gunthorpe Cc: linux-rdma@vger.kernel.org, kamalheib1@gmail.com Subject: [PATCH rdma-next v5 09/20] RDMA/mlx4: Initialize ib_device_ops struct Date: Mon, 10 Dec 2018 21:09:37 +0200 Message-Id: <20181210190948.6892-10-kamalheib1@gmail.com> X-Mailer: git-send-email 2.19.2 In-Reply-To: <20181210190948.6892-1-kamalheib1@gmail.com> References: <20181210190948.6892-1-kamalheib1@gmail.com> MIME-Version: 1.0 Sender: linux-rdma-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-rdma@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP Initialize ib_device_ops with the supported operations using ib_set_device_ops(). Signed-off-by: Kamal Heib --- drivers/infiniband/hw/mlx4/main.c | 178 +++++++++++++++++------------- 1 file changed, 99 insertions(+), 79 deletions(-) diff --git a/drivers/infiniband/hw/mlx4/main.c b/drivers/infiniband/hw/mlx4/main.c index b73b5fa1822a..1f15ec3e2b83 100644 --- a/drivers/infiniband/hw/mlx4/main.c +++ b/drivers/infiniband/hw/mlx4/main.c @@ -2220,6 +2220,11 @@ static void mlx4_ib_fill_diag_counters(struct mlx4_ib_dev *ibdev, } } +static const struct ib_device_ops mlx4_ib_hw_stats_ops = { + .alloc_hw_stats = mlx4_ib_alloc_hw_stats, + .get_hw_stats = mlx4_ib_get_hw_stats, +}; + static int mlx4_ib_alloc_diag_counters(struct mlx4_ib_dev *ibdev) { struct mlx4_ib_diag_counters *diag = ibdev->diag_counters; @@ -2246,8 +2251,7 @@ static int mlx4_ib_alloc_diag_counters(struct mlx4_ib_dev *ibdev) diag[i].offset, i); } - ibdev->ib_dev.get_hw_stats = mlx4_ib_get_hw_stats; - ibdev->ib_dev.alloc_hw_stats = mlx4_ib_alloc_hw_stats; + ib_set_device_ops(&ibdev->ib_dev, &mlx4_ib_hw_stats_ops); return 0; @@ -2525,6 +2529,88 @@ static void get_fw_ver_str(struct ib_device *device, char *str) (int) dev->dev->caps.fw_ver & 0xffff); } +static const struct ib_device_ops mlx4_ib_dev_ops = { + .add_gid = mlx4_ib_add_gid, + .alloc_mr = mlx4_ib_alloc_mr, + .alloc_pd = mlx4_ib_alloc_pd, + .alloc_ucontext = mlx4_ib_alloc_ucontext, + .attach_mcast = mlx4_ib_mcg_attach, + .create_ah = mlx4_ib_create_ah, + .create_cq = mlx4_ib_create_cq, + .create_qp = mlx4_ib_create_qp, + .create_srq = mlx4_ib_create_srq, + .dealloc_pd = mlx4_ib_dealloc_pd, + .dealloc_ucontext = mlx4_ib_dealloc_ucontext, + .del_gid = mlx4_ib_del_gid, + .dereg_mr = mlx4_ib_dereg_mr, + .destroy_ah = mlx4_ib_destroy_ah, + .destroy_cq = mlx4_ib_destroy_cq, + .destroy_qp = mlx4_ib_destroy_qp, + .destroy_srq = mlx4_ib_destroy_srq, + .detach_mcast = mlx4_ib_mcg_detach, + .disassociate_ucontext = mlx4_ib_disassociate_ucontext, + .drain_rq = mlx4_ib_drain_rq, + .drain_sq = mlx4_ib_drain_sq, + .get_dev_fw_str = get_fw_ver_str, + .get_dma_mr = mlx4_ib_get_dma_mr, + .get_link_layer = mlx4_ib_port_link_layer, + .get_netdev = mlx4_ib_get_netdev, + .get_port_immutable = mlx4_port_immutable, + .map_mr_sg = mlx4_ib_map_mr_sg, + .mmap = mlx4_ib_mmap, + .modify_cq = mlx4_ib_modify_cq, + .modify_device = mlx4_ib_modify_device, + .modify_port = mlx4_ib_modify_port, + .modify_qp = mlx4_ib_modify_qp, + .modify_srq = mlx4_ib_modify_srq, + .poll_cq = mlx4_ib_poll_cq, + .post_recv = mlx4_ib_post_recv, + .post_send = mlx4_ib_post_send, + .post_srq_recv = mlx4_ib_post_srq_recv, + .process_mad = mlx4_ib_process_mad, + .query_ah = mlx4_ib_query_ah, + .query_device = mlx4_ib_query_device, + .query_gid = 
mlx4_ib_query_gid, + .query_pkey = mlx4_ib_query_pkey, + .query_port = mlx4_ib_query_port, + .query_qp = mlx4_ib_query_qp, + .query_srq = mlx4_ib_query_srq, + .reg_user_mr = mlx4_ib_reg_user_mr, + .req_notify_cq = mlx4_ib_arm_cq, + .rereg_user_mr = mlx4_ib_rereg_user_mr, + .resize_cq = mlx4_ib_resize_cq, +}; + +static const struct ib_device_ops mlx4_ib_dev_wq_ops = { + .create_rwq_ind_table = mlx4_ib_create_rwq_ind_table, + .create_wq = mlx4_ib_create_wq, + .destroy_rwq_ind_table = mlx4_ib_destroy_rwq_ind_table, + .destroy_wq = mlx4_ib_destroy_wq, + .modify_wq = mlx4_ib_modify_wq, +}; + +static const struct ib_device_ops mlx4_ib_dev_fmr_ops = { + .alloc_fmr = mlx4_ib_fmr_alloc, + .dealloc_fmr = mlx4_ib_fmr_dealloc, + .map_phys_fmr = mlx4_ib_map_phys_fmr, + .unmap_fmr = mlx4_ib_unmap_fmr, +}; + +static const struct ib_device_ops mlx4_ib_dev_mw_ops = { + .alloc_mw = mlx4_ib_alloc_mw, + .dealloc_mw = mlx4_ib_dealloc_mw, +}; + +static const struct ib_device_ops mlx4_ib_dev_xrc_ops = { + .alloc_xrcd = mlx4_ib_alloc_xrcd, + .dealloc_xrcd = mlx4_ib_dealloc_xrcd, +}; + +static const struct ib_device_ops mlx4_ib_dev_fs_ops = { + .create_flow = mlx4_ib_create_flow, + .destroy_flow = mlx4_ib_destroy_flow, +}; + static void *mlx4_ib_add(struct mlx4_dev *dev) { struct mlx4_ib_dev *ibdev; @@ -2580,9 +2666,6 @@ static void *mlx4_ib_add(struct mlx4_dev *dev) 1 : ibdev->num_ports; ibdev->ib_dev.num_comp_vectors = dev->caps.num_comp_vectors; ibdev->ib_dev.dev.parent = &dev->persist->pdev->dev; - ibdev->ib_dev.get_netdev = mlx4_ib_get_netdev; - ibdev->ib_dev.add_gid = mlx4_ib_add_gid; - ibdev->ib_dev.del_gid = mlx4_ib_del_gid; if (dev->caps.userspace_caps) ibdev->ib_dev.uverbs_abi_ver = MLX4_IB_UVERBS_ABI_VERSION; @@ -2615,116 +2698,53 @@ static void *mlx4_ib_add(struct mlx4_dev *dev) (1ull << IB_USER_VERBS_CMD_CREATE_XSRQ) | (1ull << IB_USER_VERBS_CMD_OPEN_QP); - ibdev->ib_dev.query_device = mlx4_ib_query_device; - ibdev->ib_dev.query_port = mlx4_ib_query_port; - ibdev->ib_dev.get_link_layer = mlx4_ib_port_link_layer; - ibdev->ib_dev.query_gid = mlx4_ib_query_gid; - ibdev->ib_dev.query_pkey = mlx4_ib_query_pkey; - ibdev->ib_dev.modify_device = mlx4_ib_modify_device; - ibdev->ib_dev.modify_port = mlx4_ib_modify_port; - ibdev->ib_dev.alloc_ucontext = mlx4_ib_alloc_ucontext; - ibdev->ib_dev.dealloc_ucontext = mlx4_ib_dealloc_ucontext; - ibdev->ib_dev.mmap = mlx4_ib_mmap; - ibdev->ib_dev.alloc_pd = mlx4_ib_alloc_pd; - ibdev->ib_dev.dealloc_pd = mlx4_ib_dealloc_pd; - ibdev->ib_dev.create_ah = mlx4_ib_create_ah; - ibdev->ib_dev.query_ah = mlx4_ib_query_ah; - ibdev->ib_dev.destroy_ah = mlx4_ib_destroy_ah; - ibdev->ib_dev.create_srq = mlx4_ib_create_srq; - ibdev->ib_dev.modify_srq = mlx4_ib_modify_srq; - ibdev->ib_dev.query_srq = mlx4_ib_query_srq; - ibdev->ib_dev.destroy_srq = mlx4_ib_destroy_srq; - ibdev->ib_dev.post_srq_recv = mlx4_ib_post_srq_recv; - ibdev->ib_dev.create_qp = mlx4_ib_create_qp; - ibdev->ib_dev.modify_qp = mlx4_ib_modify_qp; - ibdev->ib_dev.query_qp = mlx4_ib_query_qp; - ibdev->ib_dev.destroy_qp = mlx4_ib_destroy_qp; - ibdev->ib_dev.drain_sq = mlx4_ib_drain_sq; - ibdev->ib_dev.drain_rq = mlx4_ib_drain_rq; - ibdev->ib_dev.post_send = mlx4_ib_post_send; - ibdev->ib_dev.post_recv = mlx4_ib_post_recv; - ibdev->ib_dev.create_cq = mlx4_ib_create_cq; - ibdev->ib_dev.modify_cq = mlx4_ib_modify_cq; - ibdev->ib_dev.resize_cq = mlx4_ib_resize_cq; - ibdev->ib_dev.destroy_cq = mlx4_ib_destroy_cq; - ibdev->ib_dev.poll_cq = mlx4_ib_poll_cq; - ibdev->ib_dev.req_notify_cq = mlx4_ib_arm_cq; - 
ibdev->ib_dev.get_dma_mr = mlx4_ib_get_dma_mr; - ibdev->ib_dev.reg_user_mr = mlx4_ib_reg_user_mr; - ibdev->ib_dev.rereg_user_mr = mlx4_ib_rereg_user_mr; - ibdev->ib_dev.dereg_mr = mlx4_ib_dereg_mr; - ibdev->ib_dev.alloc_mr = mlx4_ib_alloc_mr; - ibdev->ib_dev.map_mr_sg = mlx4_ib_map_mr_sg; - ibdev->ib_dev.attach_mcast = mlx4_ib_mcg_attach; - ibdev->ib_dev.detach_mcast = mlx4_ib_mcg_detach; - ibdev->ib_dev.process_mad = mlx4_ib_process_mad; - ibdev->ib_dev.get_port_immutable = mlx4_port_immutable; - ibdev->ib_dev.get_dev_fw_str = get_fw_ver_str; - ibdev->ib_dev.disassociate_ucontext = mlx4_ib_disassociate_ucontext; - + ib_set_device_ops(&ibdev->ib_dev, &mlx4_ib_dev_ops); ibdev->ib_dev.uverbs_ex_cmd_mask |= - (1ull << IB_USER_VERBS_EX_CMD_MODIFY_CQ); + (1ull << IB_USER_VERBS_EX_CMD_MODIFY_CQ) | + (1ull << IB_USER_VERBS_EX_CMD_QUERY_DEVICE) | + (1ull << IB_USER_VERBS_EX_CMD_CREATE_CQ) | + (1ull << IB_USER_VERBS_EX_CMD_CREATE_QP); if ((dev->caps.flags2 & MLX4_DEV_CAP_FLAG2_RSS) && ((mlx4_ib_port_link_layer(&ibdev->ib_dev, 1) == IB_LINK_LAYER_ETHERNET) || (mlx4_ib_port_link_layer(&ibdev->ib_dev, 2) == IB_LINK_LAYER_ETHERNET))) { - ibdev->ib_dev.create_wq = mlx4_ib_create_wq; - ibdev->ib_dev.modify_wq = mlx4_ib_modify_wq; - ibdev->ib_dev.destroy_wq = mlx4_ib_destroy_wq; - ibdev->ib_dev.create_rwq_ind_table = - mlx4_ib_create_rwq_ind_table; - ibdev->ib_dev.destroy_rwq_ind_table = - mlx4_ib_destroy_rwq_ind_table; ibdev->ib_dev.uverbs_ex_cmd_mask |= (1ull << IB_USER_VERBS_EX_CMD_CREATE_WQ) | (1ull << IB_USER_VERBS_EX_CMD_MODIFY_WQ) | (1ull << IB_USER_VERBS_EX_CMD_DESTROY_WQ) | (1ull << IB_USER_VERBS_EX_CMD_CREATE_RWQ_IND_TBL) | (1ull << IB_USER_VERBS_EX_CMD_DESTROY_RWQ_IND_TBL); + ib_set_device_ops(&ibdev->ib_dev, &mlx4_ib_dev_wq_ops); } - if (!mlx4_is_slave(ibdev->dev)) { - ibdev->ib_dev.alloc_fmr = mlx4_ib_fmr_alloc; - ibdev->ib_dev.map_phys_fmr = mlx4_ib_map_phys_fmr; - ibdev->ib_dev.unmap_fmr = mlx4_ib_unmap_fmr; - ibdev->ib_dev.dealloc_fmr = mlx4_ib_fmr_dealloc; - } + if (!mlx4_is_slave(ibdev->dev)) + ib_set_device_ops(&ibdev->ib_dev, &mlx4_ib_dev_fmr_ops); if (dev->caps.flags & MLX4_DEV_CAP_FLAG_MEM_WINDOW || dev->caps.bmme_flags & MLX4_BMME_FLAG_TYPE_2_WIN) { - ibdev->ib_dev.alloc_mw = mlx4_ib_alloc_mw; - ibdev->ib_dev.dealloc_mw = mlx4_ib_dealloc_mw; - ibdev->ib_dev.uverbs_cmd_mask |= (1ull << IB_USER_VERBS_CMD_ALLOC_MW) | (1ull << IB_USER_VERBS_CMD_DEALLOC_MW); + ib_set_device_ops(&ibdev->ib_dev, &mlx4_ib_dev_mw_ops); } if (dev->caps.flags & MLX4_DEV_CAP_FLAG_XRC) { - ibdev->ib_dev.alloc_xrcd = mlx4_ib_alloc_xrcd; - ibdev->ib_dev.dealloc_xrcd = mlx4_ib_dealloc_xrcd; ibdev->ib_dev.uverbs_cmd_mask |= (1ull << IB_USER_VERBS_CMD_OPEN_XRCD) | (1ull << IB_USER_VERBS_CMD_CLOSE_XRCD); + ib_set_device_ops(&ibdev->ib_dev, &mlx4_ib_dev_xrc_ops); } if (check_flow_steering_support(dev)) { ibdev->steering_support = MLX4_STEERING_MODE_DEVICE_MANAGED; - ibdev->ib_dev.create_flow = mlx4_ib_create_flow; - ibdev->ib_dev.destroy_flow = mlx4_ib_destroy_flow; - ibdev->ib_dev.uverbs_ex_cmd_mask |= (1ull << IB_USER_VERBS_EX_CMD_CREATE_FLOW) | (1ull << IB_USER_VERBS_EX_CMD_DESTROY_FLOW); + ib_set_device_ops(&ibdev->ib_dev, &mlx4_ib_dev_fs_ops); } - ibdev->ib_dev.uverbs_ex_cmd_mask |= - (1ull << IB_USER_VERBS_EX_CMD_QUERY_DEVICE) | - (1ull << IB_USER_VERBS_EX_CMD_CREATE_CQ) | - (1ull << IB_USER_VERBS_EX_CMD_CREATE_QP); - mlx4_ib_alloc_eqs(dev, ibdev); spin_lock_init(&iboe->lock); From patchwork Mon Dec 10 19:09:38 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit 
X-Patchwork-Submitter: Kamal Heib X-Patchwork-Id: 10722269 X-Patchwork-Delegate: jgg@ziepe.ca Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id EFD0318E8 for ; Mon, 10 Dec 2018 19:10:35 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id E0DBF2A795 for ; Mon, 10 Dec 2018 19:10:35 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id D56462A72F; Mon, 10 Dec 2018 19:10:35 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-6.5 required=2.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,FREEMAIL_FROM,MAILING_LIST_MULTI,RCVD_IN_DNSWL_HI, RCVD_IN_SORBS_WEB autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 11B5E2A88D for ; Mon, 10 Dec 2018 19:10:35 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729245AbeLJTKe (ORCPT ); Mon, 10 Dec 2018 14:10:34 -0500 Received: from mail-wm1-f68.google.com ([209.85.128.68]:40725 "EHLO mail-wm1-f68.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1728853AbeLJTKe (ORCPT ); Mon, 10 Dec 2018 14:10:34 -0500 Received: by mail-wm1-f68.google.com with SMTP id q26so12420816wmf.5 for ; Mon, 10 Dec 2018 11:10:32 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=m+TkVuhVbZEfpz1Vqd0q8c8+fP3Z4IKqRo9y8wjEUaQ=; b=XIBi2E5K/4jbrIcbg7ZlAnIdAnJmXTQaRp2+uQIW38O/RFp+6tqJUYSrARBQS7rdYF /NGtiQwBvuyxvjNH9ubFHzZeBAF8KXZaxKjUFoxyE9j8r7dmxETnbHFLYeuyiIQFyEQG avM7KCUQHhBZ1SfOly0CHk07nUAG1PIbr5ub8fZbw7ovHM7q6RLtSaBtXSvOE48y2Ak7 vsHZ+e7rkMMTuyG64lid7eHEsaKu+Qt3oUylpNAFMx3zcG48hMJyfr48kDKQTXOKH5mq j5pe3x49byZ5ayj6wyw1K6ZqeRohI1qM6+M4rwUCqlW43fBRdXB+4JxyGbZ7IkhDVHuX gpPg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=m+TkVuhVbZEfpz1Vqd0q8c8+fP3Z4IKqRo9y8wjEUaQ=; b=GeLZJBTGSE2jdCbSPXUg8Lx7yyNVyPYBGnoY3Hduyl1Bk4XH/WiS3VRoBtyglQf4DE G+fgrfSXVZNkO4dQXup6zKfUpO5pfPdv+2avRX30t3UhuxnB19foDw/RfsRd4nLEfvll NoMxDT4fmYvlCF++pa7mbn5E5kshJgnxAEFya5wfzb4GZWw9ogznyUtJpHKrVyUCHza9 3RcUgqb2kKO/85Gvq3jPBpYVZWkhQ8tmow4niGIUyAtT+zu/+lOBHzMAL6P/modgR6WK bOcmLKo6nf50JEMv0Q5CbVriMz1KdKc/pUDKYO6QjHTj2Toa22EZmwA9vQv3cA7hd4Ww aLLQ== X-Gm-Message-State: AA+aEWYdmFrgZF5lf1zG/0nNsFg7Op2yFU2mEgK68Yybbv6ahnFu4+HA s5h1PhHB3lzb7HDK5+H+Qtk= X-Google-Smtp-Source: AFSGD/UDteramq7DRJuATnl9jmUi9Jo1vM4GoomGVDG2MK8Jg0wtsG8hkpS70tydWmmgAk9YNJ0HnQ== X-Received: by 2002:a1c:58ce:: with SMTP id m197mr169216wmb.31.1544469031473; Mon, 10 Dec 2018 11:10:31 -0800 (PST) Received: from kheib-workstation.redhat.com (bzq-109-64-38-215.red.bezeqint.net. 
[109.64.38.215]) by smtp.gmail.com with ESMTPSA id o5sm33847657wmg.25.2018.12.10.11.10.30 (version=TLS1_2 cipher=ECDHE-RSA-CHACHA20-POLY1305 bits=256/256); Mon, 10 Dec 2018 11:10:31 -0800 (PST) From: Kamal Heib To: Doug Ledford , Jason Gunthorpe Cc: linux-rdma@vger.kernel.org, kamalheib1@gmail.com Subject: [PATCH rdma-next v5 10/20] RDMA/mlx5: Initialize ib_device_ops struct Date: Mon, 10 Dec 2018 21:09:38 +0200 Message-Id: <20181210190948.6892-11-kamalheib1@gmail.com> X-Mailer: git-send-email 2.19.2 In-Reply-To: <20181210190948.6892-1-kamalheib1@gmail.com> References: <20181210190948.6892-1-kamalheib1@gmail.com> MIME-Version: 1.0 Sender: linux-rdma-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-rdma@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP Initialize ib_device_ops with the supported operations using ib_set_device_ops(). Signed-off-by: Kamal Heib --- drivers/infiniband/hw/mlx5/main.c | 228 +++++++++++++++++------------- 1 file changed, 131 insertions(+), 97 deletions(-) diff --git a/drivers/infiniband/hw/mlx5/main.c b/drivers/infiniband/hw/mlx5/main.c index 2b09e6896e5a..27da1910123a 100644 --- a/drivers/infiniband/hw/mlx5/main.c +++ b/drivers/infiniband/hw/mlx5/main.c @@ -5811,6 +5811,94 @@ static void mlx5_ib_stage_flow_db_cleanup(struct mlx5_ib_dev *dev) kfree(dev->flow_db); } +static const struct ib_device_ops mlx5_ib_dev_ops = { + .add_gid = mlx5_ib_add_gid, + .alloc_mr = mlx5_ib_alloc_mr, + .alloc_pd = mlx5_ib_alloc_pd, + .alloc_ucontext = mlx5_ib_alloc_ucontext, + .attach_mcast = mlx5_ib_mcg_attach, + .check_mr_status = mlx5_ib_check_mr_status, + .create_ah = mlx5_ib_create_ah, + .create_counters = mlx5_ib_create_counters, + .create_cq = mlx5_ib_create_cq, + .create_flow = mlx5_ib_create_flow, + .create_qp = mlx5_ib_create_qp, + .create_srq = mlx5_ib_create_srq, + .dealloc_pd = mlx5_ib_dealloc_pd, + .dealloc_ucontext = mlx5_ib_dealloc_ucontext, + .del_gid = mlx5_ib_del_gid, + .dereg_mr = mlx5_ib_dereg_mr, + .destroy_ah = mlx5_ib_destroy_ah, + .destroy_counters = mlx5_ib_destroy_counters, + .destroy_cq = mlx5_ib_destroy_cq, + .destroy_flow = mlx5_ib_destroy_flow, + .destroy_flow_action = mlx5_ib_destroy_flow_action, + .destroy_qp = mlx5_ib_destroy_qp, + .destroy_srq = mlx5_ib_destroy_srq, + .detach_mcast = mlx5_ib_mcg_detach, + .disassociate_ucontext = mlx5_ib_disassociate_ucontext, + .drain_rq = mlx5_ib_drain_rq, + .drain_sq = mlx5_ib_drain_sq, + .get_dev_fw_str = get_dev_fw_str, + .get_dma_mr = mlx5_ib_get_dma_mr, + .get_link_layer = mlx5_ib_port_link_layer, + .map_mr_sg = mlx5_ib_map_mr_sg, + .mmap = mlx5_ib_mmap, + .modify_cq = mlx5_ib_modify_cq, + .modify_device = mlx5_ib_modify_device, + .modify_port = mlx5_ib_modify_port, + .modify_qp = mlx5_ib_modify_qp, + .modify_srq = mlx5_ib_modify_srq, + .poll_cq = mlx5_ib_poll_cq, + .post_recv = mlx5_ib_post_recv, + .post_send = mlx5_ib_post_send, + .post_srq_recv = mlx5_ib_post_srq_recv, + .process_mad = mlx5_ib_process_mad, + .query_ah = mlx5_ib_query_ah, + .query_device = mlx5_ib_query_device, + .query_gid = mlx5_ib_query_gid, + .query_pkey = mlx5_ib_query_pkey, + .query_qp = mlx5_ib_query_qp, + .query_srq = mlx5_ib_query_srq, + .read_counters = mlx5_ib_read_counters, + .reg_user_mr = mlx5_ib_reg_user_mr, + .req_notify_cq = mlx5_ib_arm_cq, + .rereg_user_mr = mlx5_ib_rereg_user_mr, + .resize_cq = mlx5_ib_resize_cq, +}; + +static const struct ib_device_ops mlx5_ib_dev_flow_ipsec_ops = { + .create_flow_action_esp = mlx5_ib_create_flow_action_esp, + .modify_flow_action_esp = 
mlx5_ib_modify_flow_action_esp, +}; + +static const struct ib_device_ops mlx5_ib_dev_ipoib_enhanced_ops = { + .rdma_netdev_get_params = mlx5_ib_rn_get_params, +}; + +static const struct ib_device_ops mlx5_ib_dev_sriov_ops = { + .get_vf_config = mlx5_ib_get_vf_config, + .get_vf_stats = mlx5_ib_get_vf_stats, + .set_vf_guid = mlx5_ib_set_vf_guid, + .set_vf_link_state = mlx5_ib_set_vf_link_state, +}; + +static const struct ib_device_ops mlx5_ib_dev_mw_ops = { + .alloc_mw = mlx5_ib_alloc_mw, + .dealloc_mw = mlx5_ib_dealloc_mw, +}; + +static const struct ib_device_ops mlx5_ib_dev_xrc_ops = { + .alloc_xrcd = mlx5_ib_alloc_xrcd, + .dealloc_xrcd = mlx5_ib_dealloc_xrcd, +}; + +static const struct ib_device_ops mlx5_ib_dev_dm_ops = { + .alloc_dm = mlx5_ib_alloc_dm, + .dealloc_dm = mlx5_ib_dealloc_dm, + .reg_dm_mr = mlx5_ib_reg_dm_mr, +}; + int mlx5_ib_stage_caps_init(struct mlx5_ib_dev *dev) { struct mlx5_core_dev *mdev = dev->mdev; @@ -5849,108 +5937,41 @@ int mlx5_ib_stage_caps_init(struct mlx5_ib_dev *dev) (1ull << IB_USER_VERBS_EX_CMD_CREATE_CQ) | (1ull << IB_USER_VERBS_EX_CMD_CREATE_QP) | (1ull << IB_USER_VERBS_EX_CMD_MODIFY_QP) | - (1ull << IB_USER_VERBS_EX_CMD_MODIFY_CQ); - - dev->ib_dev.query_device = mlx5_ib_query_device; - dev->ib_dev.get_link_layer = mlx5_ib_port_link_layer; - dev->ib_dev.query_gid = mlx5_ib_query_gid; - dev->ib_dev.add_gid = mlx5_ib_add_gid; - dev->ib_dev.del_gid = mlx5_ib_del_gid; - dev->ib_dev.query_pkey = mlx5_ib_query_pkey; - dev->ib_dev.modify_device = mlx5_ib_modify_device; - dev->ib_dev.modify_port = mlx5_ib_modify_port; - dev->ib_dev.alloc_ucontext = mlx5_ib_alloc_ucontext; - dev->ib_dev.dealloc_ucontext = mlx5_ib_dealloc_ucontext; - dev->ib_dev.mmap = mlx5_ib_mmap; - dev->ib_dev.alloc_pd = mlx5_ib_alloc_pd; - dev->ib_dev.dealloc_pd = mlx5_ib_dealloc_pd; - dev->ib_dev.create_ah = mlx5_ib_create_ah; - dev->ib_dev.query_ah = mlx5_ib_query_ah; - dev->ib_dev.destroy_ah = mlx5_ib_destroy_ah; - dev->ib_dev.create_srq = mlx5_ib_create_srq; - dev->ib_dev.modify_srq = mlx5_ib_modify_srq; - dev->ib_dev.query_srq = mlx5_ib_query_srq; - dev->ib_dev.destroy_srq = mlx5_ib_destroy_srq; - dev->ib_dev.post_srq_recv = mlx5_ib_post_srq_recv; - dev->ib_dev.create_qp = mlx5_ib_create_qp; - dev->ib_dev.modify_qp = mlx5_ib_modify_qp; - dev->ib_dev.query_qp = mlx5_ib_query_qp; - dev->ib_dev.destroy_qp = mlx5_ib_destroy_qp; - dev->ib_dev.drain_sq = mlx5_ib_drain_sq; - dev->ib_dev.drain_rq = mlx5_ib_drain_rq; - dev->ib_dev.post_send = mlx5_ib_post_send; - dev->ib_dev.post_recv = mlx5_ib_post_recv; - dev->ib_dev.create_cq = mlx5_ib_create_cq; - dev->ib_dev.modify_cq = mlx5_ib_modify_cq; - dev->ib_dev.resize_cq = mlx5_ib_resize_cq; - dev->ib_dev.destroy_cq = mlx5_ib_destroy_cq; - dev->ib_dev.poll_cq = mlx5_ib_poll_cq; - dev->ib_dev.req_notify_cq = mlx5_ib_arm_cq; - dev->ib_dev.get_dma_mr = mlx5_ib_get_dma_mr; - dev->ib_dev.reg_user_mr = mlx5_ib_reg_user_mr; - dev->ib_dev.rereg_user_mr = mlx5_ib_rereg_user_mr; - dev->ib_dev.dereg_mr = mlx5_ib_dereg_mr; - dev->ib_dev.attach_mcast = mlx5_ib_mcg_attach; - dev->ib_dev.detach_mcast = mlx5_ib_mcg_detach; - dev->ib_dev.process_mad = mlx5_ib_process_mad; - dev->ib_dev.alloc_mr = mlx5_ib_alloc_mr; - dev->ib_dev.map_mr_sg = mlx5_ib_map_mr_sg; - dev->ib_dev.check_mr_status = mlx5_ib_check_mr_status; - dev->ib_dev.get_dev_fw_str = get_dev_fw_str; - if (MLX5_CAP_GEN(mdev, ipoib_enhanced_offloads) && - IS_ENABLED(CONFIG_MLX5_CORE_IPOIB)) - dev->ib_dev.rdma_netdev_get_params = mlx5_ib_rn_get_params; + (1ull << IB_USER_VERBS_EX_CMD_MODIFY_CQ) | + (1ull << 
IB_USER_VERBS_EX_CMD_CREATE_FLOW) | + (1ull << IB_USER_VERBS_EX_CMD_DESTROY_FLOW); - if (mlx5_core_is_pf(mdev)) { - dev->ib_dev.get_vf_config = mlx5_ib_get_vf_config; - dev->ib_dev.set_vf_link_state = mlx5_ib_set_vf_link_state; - dev->ib_dev.get_vf_stats = mlx5_ib_get_vf_stats; - dev->ib_dev.set_vf_guid = mlx5_ib_set_vf_guid; - } + if (MLX5_CAP_GEN(mdev, ipoib_enhanced_offloads)) + ib_set_device_ops(&dev->ib_dev, + &mlx5_ib_dev_ipoib_enhanced_ops); - dev->ib_dev.disassociate_ucontext = mlx5_ib_disassociate_ucontext; + if (mlx5_core_is_pf(mdev)) + ib_set_device_ops(&dev->ib_dev, &mlx5_ib_dev_sriov_ops); dev->umr_fence = mlx5_get_umr_fence(MLX5_CAP_GEN(mdev, umr_fence)); if (MLX5_CAP_GEN(mdev, imaicl)) { - dev->ib_dev.alloc_mw = mlx5_ib_alloc_mw; - dev->ib_dev.dealloc_mw = mlx5_ib_dealloc_mw; dev->ib_dev.uverbs_cmd_mask |= (1ull << IB_USER_VERBS_CMD_ALLOC_MW) | (1ull << IB_USER_VERBS_CMD_DEALLOC_MW); + ib_set_device_ops(&dev->ib_dev, &mlx5_ib_dev_mw_ops); } if (MLX5_CAP_GEN(mdev, xrc)) { - dev->ib_dev.alloc_xrcd = mlx5_ib_alloc_xrcd; - dev->ib_dev.dealloc_xrcd = mlx5_ib_dealloc_xrcd; dev->ib_dev.uverbs_cmd_mask |= (1ull << IB_USER_VERBS_CMD_OPEN_XRCD) | (1ull << IB_USER_VERBS_CMD_CLOSE_XRCD); + ib_set_device_ops(&dev->ib_dev, &mlx5_ib_dev_xrc_ops); } - if (MLX5_CAP_DEV_MEM(mdev, memic)) { - dev->ib_dev.alloc_dm = mlx5_ib_alloc_dm; - dev->ib_dev.dealloc_dm = mlx5_ib_dealloc_dm; - dev->ib_dev.reg_dm_mr = mlx5_ib_reg_dm_mr; - } + if (MLX5_CAP_DEV_MEM(mdev, memic)) + ib_set_device_ops(&dev->ib_dev, &mlx5_ib_dev_dm_ops); - dev->ib_dev.create_flow = mlx5_ib_create_flow; - dev->ib_dev.destroy_flow = mlx5_ib_destroy_flow; - dev->ib_dev.uverbs_ex_cmd_mask |= - (1ull << IB_USER_VERBS_EX_CMD_CREATE_FLOW) | - (1ull << IB_USER_VERBS_EX_CMD_DESTROY_FLOW); if (mlx5_accel_ipsec_device_caps(dev->mdev) & - MLX5_ACCEL_IPSEC_CAP_DEVICE) { - dev->ib_dev.create_flow_action_esp = - mlx5_ib_create_flow_action_esp; - dev->ib_dev.modify_flow_action_esp = - mlx5_ib_modify_flow_action_esp; - } - dev->ib_dev.destroy_flow_action = mlx5_ib_destroy_flow_action; + MLX5_ACCEL_IPSEC_CAP_DEVICE) + ib_set_device_ops(&dev->ib_dev, &mlx5_ib_dev_flow_ipsec_ops); dev->ib_dev.driver_id = RDMA_DRIVER_MLX5; - dev->ib_dev.create_counters = mlx5_ib_create_counters; - dev->ib_dev.destroy_counters = mlx5_ib_destroy_counters; - dev->ib_dev.read_counters = mlx5_ib_read_counters; + ib_set_device_ops(&dev->ib_dev, &mlx5_ib_dev_ops); if (IS_ENABLED(CONFIG_INFINIBAND_USER_ACCESS)) dev->ib_dev.driver_def = mlx5_ib_defs; @@ -5967,22 +5988,37 @@ int mlx5_ib_stage_caps_init(struct mlx5_ib_dev *dev) return 0; } +static const struct ib_device_ops mlx5_ib_dev_port_ops = { + .get_port_immutable = mlx5_port_immutable, + .query_port = mlx5_ib_query_port, +}; + static int mlx5_ib_stage_non_default_cb(struct mlx5_ib_dev *dev) { - dev->ib_dev.get_port_immutable = mlx5_port_immutable; - dev->ib_dev.query_port = mlx5_ib_query_port; - + ib_set_device_ops(&dev->ib_dev, &mlx5_ib_dev_port_ops); return 0; } +static const struct ib_device_ops mlx5_ib_dev_port_rep_ops = { + .get_port_immutable = mlx5_port_rep_immutable, + .query_port = mlx5_ib_rep_query_port, +}; + int mlx5_ib_stage_rep_non_default_cb(struct mlx5_ib_dev *dev) { - dev->ib_dev.get_port_immutable = mlx5_port_rep_immutable; - dev->ib_dev.query_port = mlx5_ib_rep_query_port; - + ib_set_device_ops(&dev->ib_dev, &mlx5_ib_dev_port_rep_ops); return 0; } +static const struct ib_device_ops mlx5_ib_dev_common_roce_ops = { + .create_rwq_ind_table = mlx5_ib_create_rwq_ind_table, + .create_wq = mlx5_ib_create_wq, 
+ .destroy_rwq_ind_table = mlx5_ib_destroy_rwq_ind_table, + .destroy_wq = mlx5_ib_destroy_wq, + .get_netdev = mlx5_ib_get_netdev, + .modify_wq = mlx5_ib_modify_wq, +}; + static int mlx5_ib_stage_common_roce_init(struct mlx5_ib_dev *dev) { u8 port_num; @@ -5994,19 +6030,13 @@ static int mlx5_ib_stage_common_roce_init(struct mlx5_ib_dev *dev) dev->roce[i].last_port_state = IB_PORT_DOWN; } - dev->ib_dev.get_netdev = mlx5_ib_get_netdev; - dev->ib_dev.create_wq = mlx5_ib_create_wq; - dev->ib_dev.modify_wq = mlx5_ib_modify_wq; - dev->ib_dev.destroy_wq = mlx5_ib_destroy_wq; - dev->ib_dev.create_rwq_ind_table = mlx5_ib_create_rwq_ind_table; - dev->ib_dev.destroy_rwq_ind_table = mlx5_ib_destroy_rwq_ind_table; - dev->ib_dev.uverbs_ex_cmd_mask |= (1ull << IB_USER_VERBS_EX_CMD_CREATE_WQ) | (1ull << IB_USER_VERBS_EX_CMD_MODIFY_WQ) | (1ull << IB_USER_VERBS_EX_CMD_DESTROY_WQ) | (1ull << IB_USER_VERBS_EX_CMD_CREATE_RWQ_IND_TBL) | (1ull << IB_USER_VERBS_EX_CMD_DESTROY_RWQ_IND_TBL); + ib_set_device_ops(&dev->ib_dev, &mlx5_ib_dev_common_roce_ops); port_num = mlx5_core_native_port_num(dev->mdev) - 1; @@ -6105,11 +6135,15 @@ void mlx5_ib_stage_odp_cleanup(struct mlx5_ib_dev *dev) mlx5_ib_odp_cleanup_one(dev); } +static const struct ib_device_ops mlx5_ib_dev_hw_stats_ops = { + .alloc_hw_stats = mlx5_ib_alloc_hw_stats, + .get_hw_stats = mlx5_ib_get_hw_stats, +}; + int mlx5_ib_stage_counters_init(struct mlx5_ib_dev *dev) { if (MLX5_CAP_GEN(dev->mdev, max_qp_cnt)) { - dev->ib_dev.get_hw_stats = mlx5_ib_get_hw_stats; - dev->ib_dev.alloc_hw_stats = mlx5_ib_alloc_hw_stats; + ib_set_device_ops(&dev->ib_dev, &mlx5_ib_dev_hw_stats_ops); return mlx5_ib_alloc_counters(dev); } From patchwork Mon Dec 10 19:09:39 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Kamal Heib X-Patchwork-Id: 10722271 X-Patchwork-Delegate: jgg@ziepe.ca Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 2CFC618E8 for ; Mon, 10 Dec 2018 19:10:37 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 1DBB32A724 for ; Mon, 10 Dec 2018 19:10:37 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 105322A954; Mon, 10 Dec 2018 19:10:37 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-6.5 required=2.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,FREEMAIL_FROM,MAILING_LIST_MULTI,RCVD_IN_DNSWL_HI, RCVD_IN_SORBS_WEB autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 736572A7BC for ; Mon, 10 Dec 2018 19:10:36 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727999AbeLJTKg (ORCPT ); Mon, 10 Dec 2018 14:10:36 -0500 Received: from mail-wm1-f67.google.com ([209.85.128.67]:35247 "EHLO mail-wm1-f67.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727663AbeLJTKf (ORCPT ); Mon, 10 Dec 2018 14:10:35 -0500 Received: by mail-wm1-f67.google.com with SMTP id c126so12449235wmh.0 for ; Mon, 10 Dec 2018 11:10:33 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; 
bh=nfKEXqsrjIIIn0UQOaUD+nRboJl/I27ddTXyrhdTYk4=; b=grluIOrbp3Py6k+UtRiAkJgpVkdIMoFfA8Z7suVX3SYD+Zg7jkWIDSIPM68H3mdOcS Je9ce7zYtzq4XB8A9VB0KI4bnf339SSNv48ufmvXRfSbPD8BHCgNjxEKIS5egKT336Bh KfmzK1IqEOEArf+CpOf5V9W59RwA1GuQcGm76uIgAWnAEAXJcN4/Kignkjn5czQx98aR nE2HSIqgw/LW4HXI1cxpH1LQ5paC9VuRrrc5BtgvAr3HGjSjveAI1wG5SPgYEqFAdFx6 2hOUO9bzjUopd0BRp15Wp2X/DWndrODbX+542JzSRkutqZps+M1cb8EEboekl39npB75 kS0Q== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=nfKEXqsrjIIIn0UQOaUD+nRboJl/I27ddTXyrhdTYk4=; b=uY/tE1USTGCyRYddIvGApxvjDNdWfr5M3FNNaQB0PxdZGZTE2ICXqrIo665qTJLgCB tlrsT/2Nr1QDhLkiLudf/Jm1NoxAM2z+/hDM/eS9Y6MtvflFVtmFCVipal9oo/PXYmQc puO7yNhYkhX4ScjDKyX9UlCI9K9ittF4SvMYF1ep0Kko3Q5SiJx12Xi44W7Qbl9BSDnC vRH4UdR5B8qV+j7X0P18//78Fb9lxGMNUZhucwmwfTpjH1ej9ZTYOLN+9VFZhbzWIIZt I/no1bt2olFck+9XDkhO11fJRgbKdeI/N3EwetjSZYNHV1WsAGoJSZpvwq8cZTSxr1xg V5qA== X-Gm-Message-State: AA+aEWZ2y8VlDGsuXpZGNo0lBL+aFblCBm7k+DtnItnZ7MvUts56Qx8+ KWJp5b+LH0l+1pjdg3iZpUE= X-Google-Smtp-Source: AFSGD/U7pYixmIPluiQ+ttbJkoEThC7rAN95ZyNZldNa9HXTPAo1pYNqrfqToovHeKTXhuYjFtQzZw== X-Received: by 2002:a1c:7706:: with SMTP id t6mr10948349wmi.57.1544469032660; Mon, 10 Dec 2018 11:10:32 -0800 (PST) Received: from kheib-workstation.redhat.com (bzq-109-64-38-215.red.bezeqint.net. [109.64.38.215]) by smtp.gmail.com with ESMTPSA id o5sm33847657wmg.25.2018.12.10.11.10.31 (version=TLS1_2 cipher=ECDHE-RSA-CHACHA20-POLY1305 bits=256/256); Mon, 10 Dec 2018 11:10:32 -0800 (PST) From: Kamal Heib To: Doug Ledford , Jason Gunthorpe Cc: linux-rdma@vger.kernel.org, kamalheib1@gmail.com Subject: [PATCH rdma-next v5 11/20] RDMA/mthca: Initialize ib_device_ops struct Date: Mon, 10 Dec 2018 21:09:39 +0200 Message-Id: <20181210190948.6892-12-kamalheib1@gmail.com> X-Mailer: git-send-email 2.19.2 In-Reply-To: <20181210190948.6892-1-kamalheib1@gmail.com> References: <20181210190948.6892-1-kamalheib1@gmail.com> MIME-Version: 1.0 Sender: linux-rdma-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-rdma@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP Initialize ib_device_ops with the supported operations using ib_set_device_ops(). 
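The mthca conversion below also shows how hardware variants are handled: the shared verbs stay in one table, while the Arbel (mem-free) and Tavor data-path callbacks live in two small alternative tables and exactly one of them is merged at registration time. A stripped-down, hypothetical rendering of that selection idiom; the bar_* names and struct bar_dev are made up.

#include <rdma/ib_verbs.h>

static const struct ib_device_ops bar_common_ops = {
	.create_qp    = bar_create_qp,
	.query_device = bar_query_device,
};

static const struct ib_device_ops bar_hw_a_ops = {
	.post_recv     = bar_hw_a_post_recv,
	.post_send     = bar_hw_a_post_send,
	.req_notify_cq = bar_hw_a_arm_cq,
};

static const struct ib_device_ops bar_hw_b_ops = {
	.post_recv     = bar_hw_b_post_recv,
	.post_send     = bar_hw_b_post_send,
	.req_notify_cq = bar_hw_b_arm_cq,
};

static void bar_set_device_ops(struct bar_dev *dev)
{
	ib_set_device_ops(&dev->ib_dev, &bar_common_ops);
	ib_set_device_ops(&dev->ib_dev, bar_is_hw_a(dev) ?
			  &bar_hw_a_ops : &bar_hw_b_ops);
}
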
Signed-off-by: Kamal Heib --- drivers/infiniband/hw/mthca/mthca_provider.c | 139 ++++++++++++------- 1 file changed, 88 insertions(+), 51 deletions(-) diff --git a/drivers/infiniband/hw/mthca/mthca_provider.c b/drivers/infiniband/hw/mthca/mthca_provider.c index 691c6f048938..c697ec54ea5f 100644 --- a/drivers/infiniband/hw/mthca/mthca_provider.c +++ b/drivers/infiniband/hw/mthca/mthca_provider.c @@ -1193,6 +1193,81 @@ static void get_dev_fw_str(struct ib_device *device, char *str) (int) dev->fw_ver & 0xffff); } +static const struct ib_device_ops mthca_dev_ops = { + .alloc_pd = mthca_alloc_pd, + .alloc_ucontext = mthca_alloc_ucontext, + .attach_mcast = mthca_multicast_attach, + .create_ah = mthca_ah_create, + .create_cq = mthca_create_cq, + .create_qp = mthca_create_qp, + .dealloc_pd = mthca_dealloc_pd, + .dealloc_ucontext = mthca_dealloc_ucontext, + .dereg_mr = mthca_dereg_mr, + .destroy_ah = mthca_ah_destroy, + .destroy_cq = mthca_destroy_cq, + .destroy_qp = mthca_destroy_qp, + .detach_mcast = mthca_multicast_detach, + .get_dev_fw_str = get_dev_fw_str, + .get_dma_mr = mthca_get_dma_mr, + .get_port_immutable = mthca_port_immutable, + .mmap = mthca_mmap_uar, + .modify_device = mthca_modify_device, + .modify_port = mthca_modify_port, + .modify_qp = mthca_modify_qp, + .poll_cq = mthca_poll_cq, + .process_mad = mthca_process_mad, + .query_ah = mthca_ah_query, + .query_device = mthca_query_device, + .query_gid = mthca_query_gid, + .query_pkey = mthca_query_pkey, + .query_port = mthca_query_port, + .query_qp = mthca_query_qp, + .reg_user_mr = mthca_reg_user_mr, + .resize_cq = mthca_resize_cq, +}; + +static const struct ib_device_ops mthca_dev_arbel_srq_ops = { + .create_srq = mthca_create_srq, + .destroy_srq = mthca_destroy_srq, + .modify_srq = mthca_modify_srq, + .post_srq_recv = mthca_arbel_post_srq_recv, + .query_srq = mthca_query_srq, +}; + +static const struct ib_device_ops mthca_dev_tavor_srq_ops = { + .create_srq = mthca_create_srq, + .destroy_srq = mthca_destroy_srq, + .modify_srq = mthca_modify_srq, + .post_srq_recv = mthca_tavor_post_srq_recv, + .query_srq = mthca_query_srq, +}; + +static const struct ib_device_ops mthca_dev_arbel_fmr_ops = { + .alloc_fmr = mthca_alloc_fmr, + .dealloc_fmr = mthca_dealloc_fmr, + .map_phys_fmr = mthca_arbel_map_phys_fmr, + .unmap_fmr = mthca_unmap_fmr, +}; + +static const struct ib_device_ops mthca_dev_tavor_fmr_ops = { + .alloc_fmr = mthca_alloc_fmr, + .dealloc_fmr = mthca_dealloc_fmr, + .map_phys_fmr = mthca_tavor_map_phys_fmr, + .unmap_fmr = mthca_unmap_fmr, +}; + +static const struct ib_device_ops mthca_dev_arbel_ops = { + .post_recv = mthca_arbel_post_receive, + .post_send = mthca_arbel_post_send, + .req_notify_cq = mthca_arbel_arm_cq, +}; + +static const struct ib_device_ops mthca_dev_tavor_ops = { + .post_recv = mthca_tavor_post_receive, + .post_send = mthca_tavor_post_send, + .req_notify_cq = mthca_tavor_arm_cq, +}; + int mthca_register_device(struct mthca_dev *dev) { int ret; @@ -1226,26 +1301,8 @@ int mthca_register_device(struct mthca_dev *dev) dev->ib_dev.phys_port_cnt = dev->limits.num_ports; dev->ib_dev.num_comp_vectors = 1; dev->ib_dev.dev.parent = &dev->pdev->dev; - dev->ib_dev.query_device = mthca_query_device; - dev->ib_dev.query_port = mthca_query_port; - dev->ib_dev.modify_device = mthca_modify_device; - dev->ib_dev.modify_port = mthca_modify_port; - dev->ib_dev.query_pkey = mthca_query_pkey; - dev->ib_dev.query_gid = mthca_query_gid; - dev->ib_dev.alloc_ucontext = mthca_alloc_ucontext; - dev->ib_dev.dealloc_ucontext = 
mthca_dealloc_ucontext; - dev->ib_dev.mmap = mthca_mmap_uar; - dev->ib_dev.alloc_pd = mthca_alloc_pd; - dev->ib_dev.dealloc_pd = mthca_dealloc_pd; - dev->ib_dev.create_ah = mthca_ah_create; - dev->ib_dev.query_ah = mthca_ah_query; - dev->ib_dev.destroy_ah = mthca_ah_destroy; if (dev->mthca_flags & MTHCA_FLAG_SRQ) { - dev->ib_dev.create_srq = mthca_create_srq; - dev->ib_dev.modify_srq = mthca_modify_srq; - dev->ib_dev.query_srq = mthca_query_srq; - dev->ib_dev.destroy_srq = mthca_destroy_srq; dev->ib_dev.uverbs_cmd_mask |= (1ull << IB_USER_VERBS_CMD_CREATE_SRQ) | (1ull << IB_USER_VERBS_CMD_MODIFY_SRQ) | @@ -1253,48 +1310,28 @@ int mthca_register_device(struct mthca_dev *dev) (1ull << IB_USER_VERBS_CMD_DESTROY_SRQ); if (mthca_is_memfree(dev)) - dev->ib_dev.post_srq_recv = mthca_arbel_post_srq_recv; + ib_set_device_ops(&dev->ib_dev, + &mthca_dev_arbel_srq_ops); else - dev->ib_dev.post_srq_recv = mthca_tavor_post_srq_recv; + ib_set_device_ops(&dev->ib_dev, + &mthca_dev_tavor_srq_ops); } - dev->ib_dev.create_qp = mthca_create_qp; - dev->ib_dev.modify_qp = mthca_modify_qp; - dev->ib_dev.query_qp = mthca_query_qp; - dev->ib_dev.destroy_qp = mthca_destroy_qp; - dev->ib_dev.create_cq = mthca_create_cq; - dev->ib_dev.resize_cq = mthca_resize_cq; - dev->ib_dev.destroy_cq = mthca_destroy_cq; - dev->ib_dev.poll_cq = mthca_poll_cq; - dev->ib_dev.get_dma_mr = mthca_get_dma_mr; - dev->ib_dev.reg_user_mr = mthca_reg_user_mr; - dev->ib_dev.dereg_mr = mthca_dereg_mr; - dev->ib_dev.get_port_immutable = mthca_port_immutable; - dev->ib_dev.get_dev_fw_str = get_dev_fw_str; - if (dev->mthca_flags & MTHCA_FLAG_FMR) { - dev->ib_dev.alloc_fmr = mthca_alloc_fmr; - dev->ib_dev.unmap_fmr = mthca_unmap_fmr; - dev->ib_dev.dealloc_fmr = mthca_dealloc_fmr; if (mthca_is_memfree(dev)) - dev->ib_dev.map_phys_fmr = mthca_arbel_map_phys_fmr; + ib_set_device_ops(&dev->ib_dev, + &mthca_dev_arbel_fmr_ops); else - dev->ib_dev.map_phys_fmr = mthca_tavor_map_phys_fmr; + ib_set_device_ops(&dev->ib_dev, + &mthca_dev_tavor_fmr_ops); } - dev->ib_dev.attach_mcast = mthca_multicast_attach; - dev->ib_dev.detach_mcast = mthca_multicast_detach; - dev->ib_dev.process_mad = mthca_process_mad; + ib_set_device_ops(&dev->ib_dev, &mthca_dev_ops); - if (mthca_is_memfree(dev)) { - dev->ib_dev.req_notify_cq = mthca_arbel_arm_cq; - dev->ib_dev.post_send = mthca_arbel_post_send; - dev->ib_dev.post_recv = mthca_arbel_post_receive; - } else { - dev->ib_dev.req_notify_cq = mthca_tavor_arm_cq; - dev->ib_dev.post_send = mthca_tavor_post_send; - dev->ib_dev.post_recv = mthca_tavor_post_receive; - } + if (mthca_is_memfree(dev)) + ib_set_device_ops(&dev->ib_dev, &mthca_dev_arbel_ops); + else + ib_set_device_ops(&dev->ib_dev, &mthca_dev_tavor_ops); mutex_init(&dev->cap_mask_mutex); From patchwork Mon Dec 10 19:09:40 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Kamal Heib X-Patchwork-Id: 10722273 X-Patchwork-Delegate: jgg@ziepe.ca Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 99C8418A7 for ; Mon, 10 Dec 2018 19:10:37 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 8BE3C2A72F for ; Mon, 10 Dec 2018 19:10:37 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 801522A8A6; Mon, 10 Dec 2018 19:10:37 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 
From: Kamal Heib
To: Doug Ledford , Jason Gunthorpe
Cc: linux-rdma@vger.kernel.org, kamalheib1@gmail.com
Subject: [PATCH rdma-next v5 12/20] RDMA/nes: Initialize ib_device_ops struct
Date: Mon, 10 Dec 2018 21:09:40 +0200
Message-Id: <20181210190948.6892-13-kamalheib1@gmail.com>
X-Mailer: git-send-email 2.19.2
In-Reply-To: <20181210190948.6892-1-kamalheib1@gmail.com>
References: <20181210190948.6892-1-kamalheib1@gmail.com>

Initialize ib_device_ops with the supported operations using
ib_set_device_ops().
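The conversion pattern is the same for every driver in the series: the long run of "ibdev->foo = drv_foo;" assignments is replaced by one read-only table of callbacks plus a single ib_set_device_ops() call. As a rough, self-contained model of what such a helper has to do (copy each callback the table provides onto the device), the sketch below uses invented names -- toy_device, toy_device_ops, toy_set_device_ops, SET_OP -- and plain userspace C. It illustrates the idea only; it is not the kernel implementation, and it deliberately ignores details such as what happens when a callback is already set.

/* Toy model of the ib_set_device_ops() pattern (not the kernel code). */
#include <stdio.h>

struct toy_device_ops {
	int (*query_device)(void);
	int (*post_send)(void);
};

struct toy_device {
	struct toy_device_ops ops;
	const char *name;
};

/* Install every callback the table provides; leave the rest untouched. */
#define SET_OP(dev, tbl, name)                          \
	do {                                            \
		if ((tbl)->name)                        \
			(dev)->ops.name = (tbl)->name;  \
	} while (0)

static void toy_set_device_ops(struct toy_device *dev,
			       const struct toy_device_ops *tbl)
{
	SET_OP(dev, tbl, query_device);
	SET_OP(dev, tbl, post_send);
}

static int nes_like_query_device(void) { return 0; }
static int nes_like_post_send(void)    { return 0; }

/* One read-only table per driver, mirroring nes_dev_ops below. */
static const struct toy_device_ops nes_like_ops = {
	.post_send    = nes_like_post_send,
	.query_device = nes_like_query_device,
};

int main(void)
{
	struct toy_device dev = { .name = "toy0" };

	toy_set_device_ops(&dev, &nes_like_ops);
	printf("%s: query_device %s set\n", dev.name,
	       dev.ops.query_device ? "is" : "is not");
	return 0;
}

Compiling and running the toy shows the table's callbacks ending up on the device, which is all the per-driver patches in this series depend on.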
Signed-off-by: Kamal Heib --- drivers/infiniband/hw/nes/nes_verbs.c | 67 ++++++++++++++------------- 1 file changed, 35 insertions(+), 32 deletions(-) diff --git a/drivers/infiniband/hw/nes/nes_verbs.c b/drivers/infiniband/hw/nes/nes_verbs.c index 92d1cadd4cfd..f9d510431900 100644 --- a/drivers/infiniband/hw/nes/nes_verbs.c +++ b/drivers/infiniband/hw/nes/nes_verbs.c @@ -3627,6 +3627,39 @@ static void get_dev_fw_str(struct ib_device *dev, char *str) (nesvnic->nesdev->nesadapter->firmware_version & 0x000000ff)); } +static const struct ib_device_ops nes_dev_ops = { + .alloc_mr = nes_alloc_mr, + .alloc_mw = nes_alloc_mw, + .alloc_pd = nes_alloc_pd, + .alloc_ucontext = nes_alloc_ucontext, + .create_cq = nes_create_cq, + .create_qp = nes_create_qp, + .dealloc_mw = nes_dealloc_mw, + .dealloc_pd = nes_dealloc_pd, + .dealloc_ucontext = nes_dealloc_ucontext, + .dereg_mr = nes_dereg_mr, + .destroy_cq = nes_destroy_cq, + .destroy_qp = nes_destroy_qp, + .drain_rq = nes_drain_rq, + .drain_sq = nes_drain_sq, + .get_dev_fw_str = get_dev_fw_str, + .get_dma_mr = nes_get_dma_mr, + .get_port_immutable = nes_port_immutable, + .map_mr_sg = nes_map_mr_sg, + .mmap = nes_mmap, + .modify_qp = nes_modify_qp, + .poll_cq = nes_poll_cq, + .post_recv = nes_post_recv, + .post_send = nes_post_send, + .query_device = nes_query_device, + .query_gid = nes_query_gid, + .query_pkey = nes_query_pkey, + .query_port = nes_query_port, + .query_qp = nes_query_qp, + .reg_user_mr = nes_reg_user_mr, + .req_notify_cq = nes_req_notify_cq, +}; + /** * nes_init_ofa_device */ @@ -3673,36 +3706,6 @@ struct nes_ib_device *nes_init_ofa_device(struct net_device *netdev) nesibdev->ibdev.phys_port_cnt = 1; nesibdev->ibdev.num_comp_vectors = 1; nesibdev->ibdev.dev.parent = &nesdev->pcidev->dev; - nesibdev->ibdev.query_device = nes_query_device; - nesibdev->ibdev.query_port = nes_query_port; - nesibdev->ibdev.query_pkey = nes_query_pkey; - nesibdev->ibdev.query_gid = nes_query_gid; - nesibdev->ibdev.alloc_ucontext = nes_alloc_ucontext; - nesibdev->ibdev.dealloc_ucontext = nes_dealloc_ucontext; - nesibdev->ibdev.mmap = nes_mmap; - nesibdev->ibdev.alloc_pd = nes_alloc_pd; - nesibdev->ibdev.dealloc_pd = nes_dealloc_pd; - nesibdev->ibdev.create_qp = nes_create_qp; - nesibdev->ibdev.modify_qp = nes_modify_qp; - nesibdev->ibdev.query_qp = nes_query_qp; - nesibdev->ibdev.destroy_qp = nes_destroy_qp; - nesibdev->ibdev.create_cq = nes_create_cq; - nesibdev->ibdev.destroy_cq = nes_destroy_cq; - nesibdev->ibdev.poll_cq = nes_poll_cq; - nesibdev->ibdev.get_dma_mr = nes_get_dma_mr; - nesibdev->ibdev.reg_user_mr = nes_reg_user_mr; - nesibdev->ibdev.dereg_mr = nes_dereg_mr; - nesibdev->ibdev.alloc_mw = nes_alloc_mw; - nesibdev->ibdev.dealloc_mw = nes_dealloc_mw; - - nesibdev->ibdev.alloc_mr = nes_alloc_mr; - nesibdev->ibdev.map_mr_sg = nes_map_mr_sg; - - nesibdev->ibdev.req_notify_cq = nes_req_notify_cq; - nesibdev->ibdev.post_send = nes_post_send; - nesibdev->ibdev.post_recv = nes_post_recv; - nesibdev->ibdev.drain_sq = nes_drain_sq; - nesibdev->ibdev.drain_rq = nes_drain_rq; nesibdev->ibdev.iwcm = kzalloc(sizeof(*nesibdev->ibdev.iwcm), GFP_KERNEL); if (nesibdev->ibdev.iwcm == NULL) { @@ -3717,8 +3720,8 @@ struct nes_ib_device *nes_init_ofa_device(struct net_device *netdev) nesibdev->ibdev.iwcm->reject = nes_reject; nesibdev->ibdev.iwcm->create_listen = nes_create_listen; nesibdev->ibdev.iwcm->destroy_listen = nes_destroy_listen; - nesibdev->ibdev.get_port_immutable = nes_port_immutable; - nesibdev->ibdev.get_dev_fw_str = get_dev_fw_str; + + 
ib_set_device_ops(&nesibdev->ibdev, &nes_dev_ops);
 	memcpy(nesibdev->ibdev.iwcm->ifname, netdev->name,
 	       sizeof(nesibdev->ibdev.iwcm->ifname));

From patchwork Mon Dec 10 19:09:41 2018
X-Patchwork-Submitter: Kamal Heib
X-Patchwork-Id: 10722277
X-Patchwork-Delegate: jgg@ziepe.ca
[109.64.38.215]) by smtp.gmail.com with ESMTPSA id o5sm33847657wmg.25.2018.12.10.11.10.34 (version=TLS1_2 cipher=ECDHE-RSA-CHACHA20-POLY1305 bits=256/256); Mon, 10 Dec 2018 11:10:34 -0800 (PST) From: Kamal Heib To: Doug Ledford , Jason Gunthorpe Cc: linux-rdma@vger.kernel.org, kamalheib1@gmail.com Subject: [PATCH rdma-next v5 13/20] RDMA/ocrdma: Initialize ib_device_ops struct Date: Mon, 10 Dec 2018 21:09:41 +0200 Message-Id: <20181210190948.6892-14-kamalheib1@gmail.com> X-Mailer: git-send-email 2.19.2 In-Reply-To: <20181210190948.6892-1-kamalheib1@gmail.com> References: <20181210190948.6892-1-kamalheib1@gmail.com> MIME-Version: 1.0 Sender: linux-rdma-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-rdma@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP Initialize ib_device_ops with the supported operations using ib_set_device_ops(). Signed-off-by: Kamal Heib --- drivers/infiniband/hw/ocrdma/ocrdma_main.c | 92 +++++++++++----------- 1 file changed, 46 insertions(+), 46 deletions(-) diff --git a/drivers/infiniband/hw/ocrdma/ocrdma_main.c b/drivers/infiniband/hw/ocrdma/ocrdma_main.c index 873cc7f6fe61..1f393842453a 100644 --- a/drivers/infiniband/hw/ocrdma/ocrdma_main.c +++ b/drivers/infiniband/hw/ocrdma/ocrdma_main.c @@ -143,6 +143,50 @@ static const struct attribute_group ocrdma_attr_group = { .attrs = ocrdma_attributes, }; +static const struct ib_device_ops ocrdma_dev_ops = { + .alloc_mr = ocrdma_alloc_mr, + .alloc_pd = ocrdma_alloc_pd, + .alloc_ucontext = ocrdma_alloc_ucontext, + .create_ah = ocrdma_create_ah, + .create_cq = ocrdma_create_cq, + .create_qp = ocrdma_create_qp, + .dealloc_pd = ocrdma_dealloc_pd, + .dealloc_ucontext = ocrdma_dealloc_ucontext, + .dereg_mr = ocrdma_dereg_mr, + .destroy_ah = ocrdma_destroy_ah, + .destroy_cq = ocrdma_destroy_cq, + .destroy_qp = ocrdma_destroy_qp, + .get_dev_fw_str = get_dev_fw_str, + .get_dma_mr = ocrdma_get_dma_mr, + .get_link_layer = ocrdma_link_layer, + .get_netdev = ocrdma_get_netdev, + .get_port_immutable = ocrdma_port_immutable, + .map_mr_sg = ocrdma_map_mr_sg, + .mmap = ocrdma_mmap, + .modify_port = ocrdma_modify_port, + .modify_qp = ocrdma_modify_qp, + .poll_cq = ocrdma_poll_cq, + .post_recv = ocrdma_post_recv, + .post_send = ocrdma_post_send, + .process_mad = ocrdma_process_mad, + .query_ah = ocrdma_query_ah, + .query_device = ocrdma_query_device, + .query_pkey = ocrdma_query_pkey, + .query_port = ocrdma_query_port, + .query_qp = ocrdma_query_qp, + .reg_user_mr = ocrdma_reg_user_mr, + .req_notify_cq = ocrdma_arm_cq, + .resize_cq = ocrdma_resize_cq, +}; + +static const struct ib_device_ops ocrdma_dev_srq_ops = { + .create_srq = ocrdma_create_srq, + .destroy_srq = ocrdma_destroy_srq, + .modify_srq = ocrdma_modify_srq, + .post_srq_recv = ocrdma_post_srq_recv, + .query_srq = ocrdma_query_srq, +}; + static int ocrdma_register_device(struct ocrdma_dev *dev) { ocrdma_get_guid(dev, (u8 *)&dev->ibdev.node_guid); @@ -182,50 +226,10 @@ static int ocrdma_register_device(struct ocrdma_dev *dev) dev->ibdev.phys_port_cnt = 1; dev->ibdev.num_comp_vectors = dev->eq_cnt; - /* mandatory verbs. 
*/ - dev->ibdev.query_device = ocrdma_query_device; - dev->ibdev.query_port = ocrdma_query_port; - dev->ibdev.modify_port = ocrdma_modify_port; - dev->ibdev.get_netdev = ocrdma_get_netdev; - dev->ibdev.get_link_layer = ocrdma_link_layer; - dev->ibdev.alloc_pd = ocrdma_alloc_pd; - dev->ibdev.dealloc_pd = ocrdma_dealloc_pd; - - dev->ibdev.create_cq = ocrdma_create_cq; - dev->ibdev.destroy_cq = ocrdma_destroy_cq; - dev->ibdev.resize_cq = ocrdma_resize_cq; - - dev->ibdev.create_qp = ocrdma_create_qp; - dev->ibdev.modify_qp = ocrdma_modify_qp; - dev->ibdev.query_qp = ocrdma_query_qp; - dev->ibdev.destroy_qp = ocrdma_destroy_qp; - - dev->ibdev.query_pkey = ocrdma_query_pkey; - dev->ibdev.create_ah = ocrdma_create_ah; - dev->ibdev.destroy_ah = ocrdma_destroy_ah; - dev->ibdev.query_ah = ocrdma_query_ah; - - dev->ibdev.poll_cq = ocrdma_poll_cq; - dev->ibdev.post_send = ocrdma_post_send; - dev->ibdev.post_recv = ocrdma_post_recv; - dev->ibdev.req_notify_cq = ocrdma_arm_cq; - - dev->ibdev.get_dma_mr = ocrdma_get_dma_mr; - dev->ibdev.dereg_mr = ocrdma_dereg_mr; - dev->ibdev.reg_user_mr = ocrdma_reg_user_mr; - - dev->ibdev.alloc_mr = ocrdma_alloc_mr; - dev->ibdev.map_mr_sg = ocrdma_map_mr_sg; - /* mandatory to support user space verbs consumer. */ - dev->ibdev.alloc_ucontext = ocrdma_alloc_ucontext; - dev->ibdev.dealloc_ucontext = ocrdma_dealloc_ucontext; - dev->ibdev.mmap = ocrdma_mmap; dev->ibdev.dev.parent = &dev->nic_info.pdev->dev; - dev->ibdev.process_mad = ocrdma_process_mad; - dev->ibdev.get_port_immutable = ocrdma_port_immutable; - dev->ibdev.get_dev_fw_str = get_dev_fw_str; + ib_set_device_ops(&dev->ibdev, &ocrdma_dev_ops); if (ocrdma_get_asic_type(dev) == OCRDMA_ASIC_GEN_SKH_R) { dev->ibdev.uverbs_cmd_mask |= @@ -235,11 +239,7 @@ static int ocrdma_register_device(struct ocrdma_dev *dev) OCRDMA_UVERBS(DESTROY_SRQ) | OCRDMA_UVERBS(POST_SRQ_RECV); - dev->ibdev.create_srq = ocrdma_create_srq; - dev->ibdev.modify_srq = ocrdma_modify_srq; - dev->ibdev.query_srq = ocrdma_query_srq; - dev->ibdev.destroy_srq = ocrdma_destroy_srq; - dev->ibdev.post_srq_recv = ocrdma_post_srq_recv; + ib_set_device_ops(&dev->ibdev, &ocrdma_dev_srq_ops); } rdma_set_device_sysfs_group(&dev->ibdev, &ocrdma_attr_group); dev->ibdev.driver_id = RDMA_DRIVER_OCRDMA; From patchwork Mon Dec 10 19:09:42 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Kamal Heib X-Patchwork-Id: 10722275 X-Patchwork-Delegate: jgg@ziepe.ca Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 07B8318E8 for ; Mon, 10 Dec 2018 19:10:42 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id ECF8C2A779 for ; Mon, 10 Dec 2018 19:10:41 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id E10DC2AC68; Mon, 10 Dec 2018 19:10:41 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-6.5 required=2.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,FREEMAIL_FROM,MAILING_LIST_MULTI,RCVD_IN_DNSWL_HI, RCVD_IN_SORBS_WEB autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 598462A88D for ; Mon, 10 Dec 2018 19:10:41 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via 
From: Kamal Heib
To: Doug Ledford , Jason Gunthorpe
Cc: linux-rdma@vger.kernel.org, kamalheib1@gmail.com
Subject: [PATCH rdma-next v5 14/20] RDMA/qedr: Initialize ib_device_ops struct
Date: Mon, 10 Dec 2018 21:09:42 +0200
Message-Id: <20181210190948.6892-15-kamalheib1@gmail.com>
X-Mailer: git-send-email 2.19.2
In-Reply-To: <20181210190948.6892-1-kamalheib1@gmail.com>
References: <20181210190948.6892-1-kamalheib1@gmail.com>

Initialize ib_device_ops with the supported operations using
ib_set_device_ops().
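qedr differs slightly from the other conversions in that it keeps three tables: a common qedr_dev_ops plus small qedr_iw_dev_ops and qedr_roce_dev_ops tables, and installs the transport-specific one from qedr_iw_register_device() or qedr_roce_register_device() (see the diff below). The sketch that follows models that layering with invented names (toy_ops, apply, iw_ops, roce_ops, common_ops) in plain C; both the ordering and the members are illustrative only.

/* Sketch: layering a transport-specific ops table on top of a common one. */
#include <stdio.h>

struct toy_ops {
	const char *(*link_layer)(void);
	int (*post_send)(void);
};

static const char *iw_link_layer(void)   { return "Ethernet (iWARP)"; }
static const char *roce_link_layer(void) { return "Ethernet (RoCE)"; }
static int common_post_send(void)        { return 0; }

/* Callbacks every personality shares, like qedr_dev_ops. */
static const struct toy_ops common_ops = { .post_send = common_post_send };
/* Transport-only callbacks, like qedr_iw_dev_ops / qedr_roce_dev_ops. */
static const struct toy_ops iw_ops   = { .link_layer = iw_link_layer };
static const struct toy_ops roce_ops = { .link_layer = roce_link_layer };

/* Copy whatever the table provides onto the device's slots. */
static void apply(struct toy_ops *dst, const struct toy_ops *src)
{
	if (src->link_layer)
		dst->link_layer = src->link_layer;
	if (src->post_send)
		dst->post_send = src->post_send;
}

int main(void)
{
	struct toy_ops dev = { 0 };
	int is_iwarp = 1;	/* pretend the probe path decided this */

	apply(&dev, is_iwarp ? &iw_ops : &roce_ops);
	apply(&dev, &common_ops);
	printf("link layer: %s\n", dev.link_layer());
	return 0;
}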
Signed-off-by: Kamal Heib --- drivers/infiniband/hw/qedr/main.c | 103 +++++++++++++++--------------- 1 file changed, 52 insertions(+), 51 deletions(-) diff --git a/drivers/infiniband/hw/qedr/main.c b/drivers/infiniband/hw/qedr/main.c index 8d6ff9df49fe..75940e2a8791 100644 --- a/drivers/infiniband/hw/qedr/main.c +++ b/drivers/infiniband/hw/qedr/main.c @@ -160,12 +160,16 @@ static const struct attribute_group qedr_attr_group = { .attrs = qedr_attributes, }; +static const struct ib_device_ops qedr_iw_dev_ops = { + .get_port_immutable = qedr_iw_port_immutable, + .query_gid = qedr_iw_query_gid, +}; + static int qedr_iw_register_device(struct qedr_dev *dev) { dev->ibdev.node_type = RDMA_NODE_RNIC; - dev->ibdev.query_gid = qedr_iw_query_gid; - dev->ibdev.get_port_immutable = qedr_iw_port_immutable; + ib_set_device_ops(&dev->ibdev, &qedr_iw_dev_ops); dev->ibdev.iwcm = kzalloc(sizeof(*dev->ibdev.iwcm), GFP_KERNEL); if (!dev->ibdev.iwcm) @@ -186,13 +190,56 @@ static int qedr_iw_register_device(struct qedr_dev *dev) return 0; } +static const struct ib_device_ops qedr_roce_dev_ops = { + .get_port_immutable = qedr_roce_port_immutable, +}; + static void qedr_roce_register_device(struct qedr_dev *dev) { dev->ibdev.node_type = RDMA_NODE_IB_CA; - dev->ibdev.get_port_immutable = qedr_roce_port_immutable; + ib_set_device_ops(&dev->ibdev, &qedr_roce_dev_ops); } +static const struct ib_device_ops qedr_dev_ops = { + .alloc_mr = qedr_alloc_mr, + .alloc_pd = qedr_alloc_pd, + .alloc_ucontext = qedr_alloc_ucontext, + .create_ah = qedr_create_ah, + .create_cq = qedr_create_cq, + .create_qp = qedr_create_qp, + .create_srq = qedr_create_srq, + .dealloc_pd = qedr_dealloc_pd, + .dealloc_ucontext = qedr_dealloc_ucontext, + .dereg_mr = qedr_dereg_mr, + .destroy_ah = qedr_destroy_ah, + .destroy_cq = qedr_destroy_cq, + .destroy_qp = qedr_destroy_qp, + .destroy_srq = qedr_destroy_srq, + .get_dev_fw_str = qedr_get_dev_fw_str, + .get_dma_mr = qedr_get_dma_mr, + .get_link_layer = qedr_link_layer, + .get_netdev = qedr_get_netdev, + .map_mr_sg = qedr_map_mr_sg, + .mmap = qedr_mmap, + .modify_port = qedr_modify_port, + .modify_qp = qedr_modify_qp, + .modify_srq = qedr_modify_srq, + .poll_cq = qedr_poll_cq, + .post_recv = qedr_post_recv, + .post_send = qedr_post_send, + .post_srq_recv = qedr_post_srq_recv, + .process_mad = qedr_process_mad, + .query_device = qedr_query_device, + .query_pkey = qedr_query_pkey, + .query_port = qedr_query_port, + .query_qp = qedr_query_qp, + .query_srq = qedr_query_srq, + .reg_user_mr = qedr_reg_user_mr, + .req_notify_cq = qedr_arm_cq, + .resize_cq = qedr_resize_cq, +}; + static int qedr_register_device(struct qedr_dev *dev) { int rc; @@ -237,57 +284,11 @@ static int qedr_register_device(struct qedr_dev *dev) dev->ibdev.phys_port_cnt = 1; dev->ibdev.num_comp_vectors = dev->num_cnq; - - dev->ibdev.query_device = qedr_query_device; - dev->ibdev.query_port = qedr_query_port; - dev->ibdev.modify_port = qedr_modify_port; - - dev->ibdev.alloc_ucontext = qedr_alloc_ucontext; - dev->ibdev.dealloc_ucontext = qedr_dealloc_ucontext; - dev->ibdev.mmap = qedr_mmap; - - dev->ibdev.alloc_pd = qedr_alloc_pd; - dev->ibdev.dealloc_pd = qedr_dealloc_pd; - - dev->ibdev.create_cq = qedr_create_cq; - dev->ibdev.destroy_cq = qedr_destroy_cq; - dev->ibdev.resize_cq = qedr_resize_cq; - dev->ibdev.req_notify_cq = qedr_arm_cq; - - dev->ibdev.create_qp = qedr_create_qp; - dev->ibdev.modify_qp = qedr_modify_qp; - dev->ibdev.query_qp = qedr_query_qp; - dev->ibdev.destroy_qp = qedr_destroy_qp; - - dev->ibdev.create_srq = 
qedr_create_srq;
-	dev->ibdev.destroy_srq = qedr_destroy_srq;
-	dev->ibdev.modify_srq = qedr_modify_srq;
-	dev->ibdev.query_srq = qedr_query_srq;
-	dev->ibdev.post_srq_recv = qedr_post_srq_recv;
-	dev->ibdev.query_pkey = qedr_query_pkey;
-
-	dev->ibdev.create_ah = qedr_create_ah;
-	dev->ibdev.destroy_ah = qedr_destroy_ah;
-
-	dev->ibdev.get_dma_mr = qedr_get_dma_mr;
-	dev->ibdev.dereg_mr = qedr_dereg_mr;
-	dev->ibdev.reg_user_mr = qedr_reg_user_mr;
-	dev->ibdev.alloc_mr = qedr_alloc_mr;
-	dev->ibdev.map_mr_sg = qedr_map_mr_sg;
-
-	dev->ibdev.poll_cq = qedr_poll_cq;
-	dev->ibdev.post_send = qedr_post_send;
-	dev->ibdev.post_recv = qedr_post_recv;
-
-	dev->ibdev.process_mad = qedr_process_mad;
-
-	dev->ibdev.get_netdev = qedr_get_netdev;
-
 	dev->ibdev.dev.parent = &dev->pdev->dev;
 
-	dev->ibdev.get_link_layer = qedr_link_layer;
-	dev->ibdev.get_dev_fw_str = qedr_get_dev_fw_str;
 
 	rdma_set_device_sysfs_group(&dev->ibdev, &qedr_attr_group);
 
+	ib_set_device_ops(&dev->ibdev, &qedr_dev_ops);
+
 	dev->ibdev.driver_id = RDMA_DRIVER_QEDR;
 	return ib_register_device(&dev->ibdev, "qedr%d", NULL);
 }
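A second recurring pattern in the series (mthca and ocrdma above, vmw_pvrdma below) is that optional features keep their callbacks in a separate table: the SRQ operations are installed, together with the matching IB_USER_VERBS_CMD_*_SRQ bits in uverbs_cmd_mask, only when the device actually supports SRQs. The compact sketch below is a hypothetical illustration of that gating idea; the structure, the has_srq flag and the bit values are invented for the example and are not the kernel ABI.

/* Sketch: enable an optional op group plus its uverbs command bits together. */
#include <stdio.h>
#include <stdint.h>

#define CMD_CREATE_SRQ   10	/* invented bit positions, not the real ABI values */
#define CMD_MODIFY_SRQ   11
#define CMD_QUERY_SRQ    12
#define CMD_DESTROY_SRQ  13

struct toy_dev {
	uint64_t uverbs_cmd_mask;
	int (*create_srq)(void);
	int (*destroy_srq)(void);
	unsigned int has_srq : 1;
};

static int toy_create_srq(void)  { return 0; }
static int toy_destroy_srq(void) { return 0; }

static void toy_register(struct toy_dev *dev)
{
	if (dev->has_srq) {
		/* Advertise the commands and install the callbacks as one unit. */
		dev->uverbs_cmd_mask |= (1ull << CMD_CREATE_SRQ) |
					(1ull << CMD_MODIFY_SRQ) |
					(1ull << CMD_QUERY_SRQ)  |
					(1ull << CMD_DESTROY_SRQ);
		dev->create_srq  = toy_create_srq;
		dev->destroy_srq = toy_destroy_srq;
	}
}

int main(void)
{
	struct toy_dev dev = { .has_srq = 1 };

	toy_register(&dev);
	printf("uverbs_cmd_mask: 0x%llx\n",
	       (unsigned long long)dev.uverbs_cmd_mask);
	return 0;
}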
From patchwork Mon Dec 10 19:09:43 2018
X-Patchwork-Submitter: Kamal Heib
X-Patchwork-Id: 10722287
X-Patchwork-Delegate: jgg@ziepe.ca
From: Kamal Heib
To: Doug Ledford , Jason Gunthorpe
Cc: linux-rdma@vger.kernel.org, kamalheib1@gmail.com
Subject: [PATCH rdma-next v5 15/20] RDMA/qib: Initialize ib_device_ops struct
Date: Mon, 10 Dec 2018 21:09:43 +0200
Message-Id: <20181210190948.6892-16-kamalheib1@gmail.com>
X-Mailer: git-send-email 2.19.2
In-Reply-To: <20181210190948.6892-1-kamalheib1@gmail.com>
References: <20181210190948.6892-1-kamalheib1@gmail.com>

Initialize ib_device_ops with the supported operations using
ib_set_device_ops().

Signed-off-by: Kamal Heib
---
 drivers/infiniband/hw/qib/qib_verbs.c | 8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/drivers/infiniband/hw/qib/qib_verbs.c b/drivers/infiniband/hw/qib/qib_verbs.c
index 8914abdd7584..611a6b5ef83f 100644
--- a/drivers/infiniband/hw/qib/qib_verbs.c
+++ b/drivers/infiniband/hw/qib/qib_verbs.c
@@ -1493,6 +1493,11 @@ static void qib_fill_device_attr(struct qib_devdata *dd)
 	dd->verbs_dev.rdi.wc_opcode = ib_qib_wc_opcode;
 }
 
+static const struct ib_device_ops qib_dev_ops = {
+	.modify_device = qib_modify_device,
+	.process_mad = qib_process_mad,
+};
+
 /**
  * qib_register_ib_device - register our device with the infiniband core
  * @dd: the device data structure
@@ -1555,8 +1560,6 @@ int qib_register_ib_device(struct qib_devdata *dd)
 	ibdev->node_guid = ppd->guid;
 	ibdev->phys_port_cnt = dd->num_pports;
 	ibdev->dev.parent = &dd->pcidev->dev;
-	ibdev->modify_device = qib_modify_device;
-	ibdev->process_mad = qib_process_mad;
 	snprintf(ibdev->node_desc, sizeof(ibdev->node_desc),
 		 "Intel Infiniband HCA %s", init_utsname()->nodename);
 
@@ -1624,6 +1627,7 @@ int qib_register_ib_device(struct qib_devdata *dd)
 	}
 
 	rdma_set_device_sysfs_group(&dd->verbs_dev.rdi.ibdev, &qib_attr_group);
+	ib_set_device_ops(ibdev, &qib_dev_ops);
 	ret = rvt_register_device(&dd->verbs_dev.rdi, RDMA_DRIVER_QIB);
 	if (ret)
 		goto err_tx;
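qib only carries the two callbacks it wants to override, modify_device and process_mad; every other operation is expected to come from rdmavt's own rvt_dev_ops when rvt_register_device() runs (patch 19/20 below). That split only works if installing the core defaults never overwrites a callback the driver already set, which is the behaviour the series' helper is understood to provide. The toy below models just that "first one set wins" rule with invented names (toy_dev, set_default); it is an assumption-labelled sketch, not the rdmavt code.

/* Sketch: core defaults must not overwrite driver-provided callbacks. */
#include <stdio.h>

struct toy_dev {
	int (*process_mad)(void);
	int (*query_port)(void);
};

static int driver_process_mad(void) { return 1; }  /* qib-style override */
static int core_process_mad(void)   { return 2; }  /* rdmavt-style default */
static int core_query_port(void)    { return 3; }

/* Install a default only where the driver left the slot empty. */
static void set_default(int (**slot)(void), int (*fn)(void))
{
	if (!*slot)
		*slot = fn;
}

int main(void)
{
	struct toy_dev dev = { .process_mad = driver_process_mad };

	set_default(&dev.process_mad, core_process_mad);
	set_default(&dev.query_port,  core_query_port);

	/* The driver's callback survives; the missing one gets the default. */
	printf("process_mad -> %d, query_port -> %d\n",
	       dev.process_mad(), dev.query_port());
	return 0;
}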
From patchwork Mon Dec 10 19:09:44 2018
X-Patchwork-Submitter: Kamal Heib
X-Patchwork-Id: 10722285
X-Patchwork-Delegate: jgg@ziepe.ca
[109.64.38.215]) by smtp.gmail.com with ESMTPSA id o5sm33847657wmg.25.2018.12.10.11.10.37 (version=TLS1_2 cipher=ECDHE-RSA-CHACHA20-POLY1305 bits=256/256); Mon, 10 Dec 2018 11:10:38 -0800 (PST) From: Kamal Heib To: Doug Ledford , Jason Gunthorpe Cc: linux-rdma@vger.kernel.org, kamalheib1@gmail.com Subject: [PATCH rdma-next v5 16/20] RDMA/usnic: Initialize ib_device_ops struct Date: Mon, 10 Dec 2018 21:09:44 +0200 Message-Id: <20181210190948.6892-17-kamalheib1@gmail.com> X-Mailer: git-send-email 2.19.2 In-Reply-To: <20181210190948.6892-1-kamalheib1@gmail.com> References: <20181210190948.6892-1-kamalheib1@gmail.com> MIME-Version: 1.0 Sender: linux-rdma-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-rdma@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP Initialize ib_device_ops with the supported operations using ib_set_device_ops(). Signed-off-by: Kamal Heib --- drivers/infiniband/hw/usnic/usnic_ib_main.c | 61 +++++++++++---------- 1 file changed, 32 insertions(+), 29 deletions(-) diff --git a/drivers/infiniband/hw/usnic/usnic_ib_main.c b/drivers/infiniband/hw/usnic/usnic_ib_main.c index 413fa5732e2b..b2323a52a0dd 100644 --- a/drivers/infiniband/hw/usnic/usnic_ib_main.c +++ b/drivers/infiniband/hw/usnic/usnic_ib_main.c @@ -330,6 +330,37 @@ static void usnic_get_dev_fw_str(struct ib_device *device, char *str) snprintf(str, IB_FW_VERSION_NAME_MAX, "%s", info.fw_version); } +static const struct ib_device_ops usnic_dev_ops = { + .alloc_pd = usnic_ib_alloc_pd, + .alloc_ucontext = usnic_ib_alloc_ucontext, + .create_ah = usnic_ib_create_ah, + .create_cq = usnic_ib_create_cq, + .create_qp = usnic_ib_create_qp, + .dealloc_pd = usnic_ib_dealloc_pd, + .dealloc_ucontext = usnic_ib_dealloc_ucontext, + .dereg_mr = usnic_ib_dereg_mr, + .destroy_ah = usnic_ib_destroy_ah, + .destroy_cq = usnic_ib_destroy_cq, + .destroy_qp = usnic_ib_destroy_qp, + .get_dev_fw_str = usnic_get_dev_fw_str, + .get_dma_mr = usnic_ib_get_dma_mr, + .get_link_layer = usnic_ib_port_link_layer, + .get_netdev = usnic_get_netdev, + .get_port_immutable = usnic_port_immutable, + .mmap = usnic_ib_mmap, + .modify_qp = usnic_ib_modify_qp, + .poll_cq = usnic_ib_poll_cq, + .post_recv = usnic_ib_post_recv, + .post_send = usnic_ib_post_send, + .query_device = usnic_ib_query_device, + .query_gid = usnic_ib_query_gid, + .query_pkey = usnic_ib_query_pkey, + .query_port = usnic_ib_query_port, + .query_qp = usnic_ib_query_qp, + .reg_user_mr = usnic_ib_reg_mr, + .req_notify_cq = usnic_ib_req_notify_cq, +}; + /* Start of PF discovery section */ static void *usnic_ib_device_add(struct pci_dev *dev) { @@ -386,35 +417,7 @@ static void *usnic_ib_device_add(struct pci_dev *dev) (1ull << IB_USER_VERBS_CMD_DETACH_MCAST) | (1ull << IB_USER_VERBS_CMD_OPEN_QP); - us_ibdev->ib_dev.query_device = usnic_ib_query_device; - us_ibdev->ib_dev.query_port = usnic_ib_query_port; - us_ibdev->ib_dev.query_pkey = usnic_ib_query_pkey; - us_ibdev->ib_dev.query_gid = usnic_ib_query_gid; - us_ibdev->ib_dev.get_netdev = usnic_get_netdev; - us_ibdev->ib_dev.get_link_layer = usnic_ib_port_link_layer; - us_ibdev->ib_dev.alloc_pd = usnic_ib_alloc_pd; - us_ibdev->ib_dev.dealloc_pd = usnic_ib_dealloc_pd; - us_ibdev->ib_dev.create_qp = usnic_ib_create_qp; - us_ibdev->ib_dev.modify_qp = usnic_ib_modify_qp; - us_ibdev->ib_dev.query_qp = usnic_ib_query_qp; - us_ibdev->ib_dev.destroy_qp = usnic_ib_destroy_qp; - us_ibdev->ib_dev.create_cq = usnic_ib_create_cq; - us_ibdev->ib_dev.destroy_cq = usnic_ib_destroy_cq; - us_ibdev->ib_dev.reg_user_mr = usnic_ib_reg_mr; - 
us_ibdev->ib_dev.dereg_mr = usnic_ib_dereg_mr;
-	us_ibdev->ib_dev.alloc_ucontext = usnic_ib_alloc_ucontext;
-	us_ibdev->ib_dev.dealloc_ucontext = usnic_ib_dealloc_ucontext;
-	us_ibdev->ib_dev.mmap = usnic_ib_mmap;
-	us_ibdev->ib_dev.create_ah = usnic_ib_create_ah;
-	us_ibdev->ib_dev.destroy_ah = usnic_ib_destroy_ah;
-	us_ibdev->ib_dev.post_send = usnic_ib_post_send;
-	us_ibdev->ib_dev.post_recv = usnic_ib_post_recv;
-	us_ibdev->ib_dev.poll_cq = usnic_ib_poll_cq;
-	us_ibdev->ib_dev.req_notify_cq = usnic_ib_req_notify_cq;
-	us_ibdev->ib_dev.get_dma_mr = usnic_ib_get_dma_mr;
-	us_ibdev->ib_dev.get_port_immutable = usnic_port_immutable;
-	us_ibdev->ib_dev.get_dev_fw_str = usnic_get_dev_fw_str;
-
+	ib_set_device_ops(&us_ibdev->ib_dev, &usnic_dev_ops);
 	us_ibdev->ib_dev.driver_id = RDMA_DRIVER_USNIC;
 	rdma_set_device_sysfs_group(&us_ibdev->ib_dev, &usnic_attr_group);

From patchwork Mon Dec 10 19:09:45 2018
X-Patchwork-Submitter: Kamal Heib
X-Patchwork-Id: 10722279
X-Patchwork-Delegate: jgg@ziepe.ca
Sc9SaSx7rTWBwSPWOKmlkx3FZXsjNKmbusSCxT9oO+Z1mAVhqYUFyWV+0HI+KiP3KTh+ aZRxfT20f0YNbpYO0YIBU44eHlt2mQKrvxKYzUecInc0XXjYAC5LuXDzAzHvwDCTVsmY Np0gVpw2txd6HTTYs7voLNhhm8a5tCGkPZ+UFjD7Okmpzlc9oYaqjuLAgSvg92NNtBIH KBxQ== X-Gm-Message-State: AA+aEWaF8CT83hKqy2KbeC4sZGUudha1Lsj8XgUJUDzEeOea+qy+L1sN Jk+xZsIYUYOS2jSOuqm52gkkGwN/ X-Google-Smtp-Source: AFSGD/UzC63Ca7I+RCN5EeAytnBMasTh1gCQifY6dgf6OR0pdtbs/bHAsHeiAqoFMjZhdWGGss276Q== X-Received: by 2002:a1c:47:: with SMTP id 68mr11094238wma.89.1544469040138; Mon, 10 Dec 2018 11:10:40 -0800 (PST) Received: from kheib-workstation.redhat.com (bzq-109-64-38-215.red.bezeqint.net. [109.64.38.215]) by smtp.gmail.com with ESMTPSA id o5sm33847657wmg.25.2018.12.10.11.10.38 (version=TLS1_2 cipher=ECDHE-RSA-CHACHA20-POLY1305 bits=256/256); Mon, 10 Dec 2018 11:10:39 -0800 (PST) From: Kamal Heib To: Doug Ledford , Jason Gunthorpe Cc: linux-rdma@vger.kernel.org, kamalheib1@gmail.com Subject: [PATCH rdma-next v5 17/20] RDMA/vmw_pvrdma: Initialize ib_device_ops struct Date: Mon, 10 Dec 2018 21:09:45 +0200 Message-Id: <20181210190948.6892-18-kamalheib1@gmail.com> X-Mailer: git-send-email 2.19.2 In-Reply-To: <20181210190948.6892-1-kamalheib1@gmail.com> References: <20181210190948.6892-1-kamalheib1@gmail.com> MIME-Version: 1.0 Sender: linux-rdma-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-rdma@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP Initialize ib_device_ops with the supported operations using ib_set_device_ops(). Signed-off-by: Kamal Heib --- .../infiniband/hw/vmw_pvrdma/pvrdma_main.c | 82 ++++++++++--------- 1 file changed, 45 insertions(+), 37 deletions(-) diff --git a/drivers/infiniband/hw/vmw_pvrdma/pvrdma_main.c b/drivers/infiniband/hw/vmw_pvrdma/pvrdma_main.c index 398443f43dc3..eaa109dbc96a 100644 --- a/drivers/infiniband/hw/vmw_pvrdma/pvrdma_main.c +++ b/drivers/infiniband/hw/vmw_pvrdma/pvrdma_main.c @@ -161,6 +161,49 @@ static struct net_device *pvrdma_get_netdev(struct ib_device *ibdev, return netdev; } +static const struct ib_device_ops pvrdma_dev_ops = { + .add_gid = pvrdma_add_gid, + .alloc_mr = pvrdma_alloc_mr, + .alloc_pd = pvrdma_alloc_pd, + .alloc_ucontext = pvrdma_alloc_ucontext, + .create_ah = pvrdma_create_ah, + .create_cq = pvrdma_create_cq, + .create_qp = pvrdma_create_qp, + .dealloc_pd = pvrdma_dealloc_pd, + .dealloc_ucontext = pvrdma_dealloc_ucontext, + .del_gid = pvrdma_del_gid, + .dereg_mr = pvrdma_dereg_mr, + .destroy_ah = pvrdma_destroy_ah, + .destroy_cq = pvrdma_destroy_cq, + .destroy_qp = pvrdma_destroy_qp, + .get_dev_fw_str = pvrdma_get_fw_ver_str, + .get_dma_mr = pvrdma_get_dma_mr, + .get_link_layer = pvrdma_port_link_layer, + .get_netdev = pvrdma_get_netdev, + .get_port_immutable = pvrdma_port_immutable, + .map_mr_sg = pvrdma_map_mr_sg, + .mmap = pvrdma_mmap, + .modify_port = pvrdma_modify_port, + .modify_qp = pvrdma_modify_qp, + .poll_cq = pvrdma_poll_cq, + .post_recv = pvrdma_post_recv, + .post_send = pvrdma_post_send, + .query_device = pvrdma_query_device, + .query_gid = pvrdma_query_gid, + .query_pkey = pvrdma_query_pkey, + .query_port = pvrdma_query_port, + .query_qp = pvrdma_query_qp, + .reg_user_mr = pvrdma_reg_user_mr, + .req_notify_cq = pvrdma_req_notify_cq, +}; + +static const struct ib_device_ops pvrdma_dev_srq_ops = { + .create_srq = pvrdma_create_srq, + .destroy_srq = pvrdma_destroy_srq, + .modify_srq = pvrdma_modify_srq, + .query_srq = pvrdma_query_srq, +}; + static int pvrdma_register_device(struct pvrdma_dev *dev) { int ret = -1; @@ -197,39 +240,7 @@ static int 
pvrdma_register_device(struct pvrdma_dev *dev) dev->ib_dev.node_type = RDMA_NODE_IB_CA; dev->ib_dev.phys_port_cnt = dev->dsr->caps.phys_port_cnt; - dev->ib_dev.query_device = pvrdma_query_device; - dev->ib_dev.query_port = pvrdma_query_port; - dev->ib_dev.query_gid = pvrdma_query_gid; - dev->ib_dev.query_pkey = pvrdma_query_pkey; - dev->ib_dev.modify_port = pvrdma_modify_port; - dev->ib_dev.alloc_ucontext = pvrdma_alloc_ucontext; - dev->ib_dev.dealloc_ucontext = pvrdma_dealloc_ucontext; - dev->ib_dev.mmap = pvrdma_mmap; - dev->ib_dev.alloc_pd = pvrdma_alloc_pd; - dev->ib_dev.dealloc_pd = pvrdma_dealloc_pd; - dev->ib_dev.create_ah = pvrdma_create_ah; - dev->ib_dev.destroy_ah = pvrdma_destroy_ah; - dev->ib_dev.create_qp = pvrdma_create_qp; - dev->ib_dev.modify_qp = pvrdma_modify_qp; - dev->ib_dev.query_qp = pvrdma_query_qp; - dev->ib_dev.destroy_qp = pvrdma_destroy_qp; - dev->ib_dev.post_send = pvrdma_post_send; - dev->ib_dev.post_recv = pvrdma_post_recv; - dev->ib_dev.create_cq = pvrdma_create_cq; - dev->ib_dev.destroy_cq = pvrdma_destroy_cq; - dev->ib_dev.poll_cq = pvrdma_poll_cq; - dev->ib_dev.req_notify_cq = pvrdma_req_notify_cq; - dev->ib_dev.get_dma_mr = pvrdma_get_dma_mr; - dev->ib_dev.reg_user_mr = pvrdma_reg_user_mr; - dev->ib_dev.dereg_mr = pvrdma_dereg_mr; - dev->ib_dev.alloc_mr = pvrdma_alloc_mr; - dev->ib_dev.map_mr_sg = pvrdma_map_mr_sg; - dev->ib_dev.add_gid = pvrdma_add_gid; - dev->ib_dev.del_gid = pvrdma_del_gid; - dev->ib_dev.get_netdev = pvrdma_get_netdev; - dev->ib_dev.get_port_immutable = pvrdma_port_immutable; - dev->ib_dev.get_link_layer = pvrdma_port_link_layer; - dev->ib_dev.get_dev_fw_str = pvrdma_get_fw_ver_str; + ib_set_device_ops(&dev->ib_dev, &pvrdma_dev_ops); mutex_init(&dev->port_mutex); spin_lock_init(&dev->desc_lock); @@ -255,10 +266,7 @@ static int pvrdma_register_device(struct pvrdma_dev *dev) (1ull << IB_USER_VERBS_CMD_DESTROY_SRQ) | (1ull << IB_USER_VERBS_CMD_POST_SRQ_RECV); - dev->ib_dev.create_srq = pvrdma_create_srq; - dev->ib_dev.modify_srq = pvrdma_modify_srq; - dev->ib_dev.query_srq = pvrdma_query_srq; - dev->ib_dev.destroy_srq = pvrdma_destroy_srq; + ib_set_device_ops(&dev->ib_dev, &pvrdma_dev_srq_ops); dev->srq_tbl = kcalloc(dev->dsr->caps.max_srq, sizeof(struct pvrdma_srq *), From patchwork Mon Dec 10 19:09:46 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Kamal Heib X-Patchwork-Id: 10722281 X-Patchwork-Delegate: jgg@ziepe.ca Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id C38DF3E9D for ; Mon, 10 Dec 2018 19:10:45 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id B35792A724 for ; Mon, 10 Dec 2018 19:10:45 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id A7ED92AD4E; Mon, 10 Dec 2018 19:10:45 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-6.5 required=2.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,FREEMAIL_FROM,MAILING_LIST_MULTI,RCVD_IN_DNSWL_HI, RCVD_IN_SORBS_WEB autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 191B32A724 for ; Mon, 10 Dec 2018 19:10:45 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via 
From: Kamal Heib
To: Doug Ledford , Jason Gunthorpe
Cc: linux-rdma@vger.kernel.org, kamalheib1@gmail.com
Subject: [PATCH rdma-next v5 18/20] RDMA/rxe: Initialize ib_device_ops struct
Date: Mon, 10 Dec 2018 21:09:46 +0200
Message-Id: <20181210190948.6892-19-kamalheib1@gmail.com>
X-Mailer: git-send-email 2.19.2
In-Reply-To: <20181210190948.6892-1-kamalheib1@gmail.com>
References: <20181210190948.6892-1-kamalheib1@gmail.com>

Initialize ib_device_ops with the supported operations using
ib_set_device_ops().
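Whichever way the table gets filled in, consumers of the verbs API still reach the driver through the per-device callbacks, so a conversion like the rxe one below changes nothing for callers. A minimal, invented illustration of that indirection (toy_dev, toy_post_send and rxe_like_post_send are made-up names, not the rxe code):

/* Sketch: callers dispatch through whatever callbacks registration installed. */
#include <stdio.h>

struct toy_dev {
	int (*post_send)(struct toy_dev *dev, int wr_id);
	const char *name;
};

static int rxe_like_post_send(struct toy_dev *dev, int wr_id)
{
	printf("%s: posted wr %d\n", dev->name, wr_id);
	return 0;
}

/* What a verbs consumer would call; it never names the driver directly. */
static int toy_post_send(struct toy_dev *dev, int wr_id)
{
	return dev->post_send(dev, wr_id);
}

int main(void)
{
	struct toy_dev dev = {
		.post_send = rxe_like_post_send,
		.name = "toy_rxe0",
	};

	return toy_post_send(&dev, 7);
}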
Signed-off-by: Kamal Heib --- drivers/infiniband/sw/rxe/rxe_verbs.c | 90 ++++++++++++++------------- 1 file changed, 47 insertions(+), 43 deletions(-) diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.c b/drivers/infiniband/sw/rxe/rxe_verbs.c index 30817c79ba96..036ee07f3fa2 100644 --- a/drivers/infiniband/sw/rxe/rxe_verbs.c +++ b/drivers/infiniband/sw/rxe/rxe_verbs.c @@ -1165,6 +1165,52 @@ static const struct attribute_group rxe_attr_group = { .attrs = rxe_dev_attributes, }; +static const struct ib_device_ops rxe_dev_ops = { + .alloc_hw_stats = rxe_ib_alloc_hw_stats, + .alloc_mr = rxe_alloc_mr, + .alloc_pd = rxe_alloc_pd, + .alloc_ucontext = rxe_alloc_ucontext, + .attach_mcast = rxe_attach_mcast, + .create_ah = rxe_create_ah, + .create_cq = rxe_create_cq, + .create_qp = rxe_create_qp, + .create_srq = rxe_create_srq, + .dealloc_pd = rxe_dealloc_pd, + .dealloc_ucontext = rxe_dealloc_ucontext, + .dereg_mr = rxe_dereg_mr, + .destroy_ah = rxe_destroy_ah, + .destroy_cq = rxe_destroy_cq, + .destroy_qp = rxe_destroy_qp, + .destroy_srq = rxe_destroy_srq, + .detach_mcast = rxe_detach_mcast, + .get_dma_mr = rxe_get_dma_mr, + .get_hw_stats = rxe_ib_get_hw_stats, + .get_link_layer = rxe_get_link_layer, + .get_netdev = rxe_get_netdev, + .get_port_immutable = rxe_port_immutable, + .map_mr_sg = rxe_map_mr_sg, + .mmap = rxe_mmap, + .modify_ah = rxe_modify_ah, + .modify_device = rxe_modify_device, + .modify_port = rxe_modify_port, + .modify_qp = rxe_modify_qp, + .modify_srq = rxe_modify_srq, + .peek_cq = rxe_peek_cq, + .poll_cq = rxe_poll_cq, + .post_recv = rxe_post_recv, + .post_send = rxe_post_send, + .post_srq_recv = rxe_post_srq_recv, + .query_ah = rxe_query_ah, + .query_device = rxe_query_device, + .query_pkey = rxe_query_pkey, + .query_port = rxe_query_port, + .query_qp = rxe_query_qp, + .query_srq = rxe_query_srq, + .reg_user_mr = rxe_reg_user_mr, + .req_notify_cq = rxe_req_notify_cq, + .resize_cq = rxe_resize_cq, +}; + int rxe_register_device(struct rxe_dev *rxe) { int err; @@ -1219,49 +1265,7 @@ int rxe_register_device(struct rxe_dev *rxe) | BIT_ULL(IB_USER_VERBS_CMD_DETACH_MCAST) ; - dev->query_device = rxe_query_device; - dev->modify_device = rxe_modify_device; - dev->query_port = rxe_query_port; - dev->modify_port = rxe_modify_port; - dev->get_link_layer = rxe_get_link_layer; - dev->get_netdev = rxe_get_netdev; - dev->query_pkey = rxe_query_pkey; - dev->alloc_ucontext = rxe_alloc_ucontext; - dev->dealloc_ucontext = rxe_dealloc_ucontext; - dev->mmap = rxe_mmap; - dev->get_port_immutable = rxe_port_immutable; - dev->alloc_pd = rxe_alloc_pd; - dev->dealloc_pd = rxe_dealloc_pd; - dev->create_ah = rxe_create_ah; - dev->modify_ah = rxe_modify_ah; - dev->query_ah = rxe_query_ah; - dev->destroy_ah = rxe_destroy_ah; - dev->create_srq = rxe_create_srq; - dev->modify_srq = rxe_modify_srq; - dev->query_srq = rxe_query_srq; - dev->destroy_srq = rxe_destroy_srq; - dev->post_srq_recv = rxe_post_srq_recv; - dev->create_qp = rxe_create_qp; - dev->modify_qp = rxe_modify_qp; - dev->query_qp = rxe_query_qp; - dev->destroy_qp = rxe_destroy_qp; - dev->post_send = rxe_post_send; - dev->post_recv = rxe_post_recv; - dev->create_cq = rxe_create_cq; - dev->destroy_cq = rxe_destroy_cq; - dev->resize_cq = rxe_resize_cq; - dev->poll_cq = rxe_poll_cq; - dev->peek_cq = rxe_peek_cq; - dev->req_notify_cq = rxe_req_notify_cq; - dev->get_dma_mr = rxe_get_dma_mr; - dev->reg_user_mr = rxe_reg_user_mr; - dev->dereg_mr = rxe_dereg_mr; - dev->alloc_mr = rxe_alloc_mr; - dev->map_mr_sg = rxe_map_mr_sg; - dev->attach_mcast = 
rxe_attach_mcast;
-	dev->detach_mcast = rxe_detach_mcast;
-	dev->get_hw_stats = rxe_ib_get_hw_stats;
-	dev->alloc_hw_stats = rxe_ib_alloc_hw_stats;
+	ib_set_device_ops(dev, &rxe_dev_ops);
 
 	tfm = crypto_alloc_shash("crc32", 0, 0);
 	if (IS_ERR(tfm)) {

From patchwork Mon Dec 10 19:09:47 2018
X-Patchwork-Submitter: Kamal Heib
X-Patchwork-Id: 10722283
X-Patchwork-Delegate: jgg@ziepe.ca
[109.64.38.215]) by smtp.gmail.com with ESMTPSA id o5sm33847657wmg.25.2018.12.10.11.10.41 (version=TLS1_2 cipher=ECDHE-RSA-CHACHA20-POLY1305 bits=256/256); Mon, 10 Dec 2018 11:10:42 -0800 (PST) From: Kamal Heib To: Doug Ledford , Jason Gunthorpe Cc: linux-rdma@vger.kernel.org, kamalheib1@gmail.com Subject: [PATCH rdma-next v5 19/20] RDMA/rdmavt: Initialize ib_device_ops struct Date: Mon, 10 Dec 2018 21:09:47 +0200 Message-Id: <20181210190948.6892-20-kamalheib1@gmail.com> X-Mailer: git-send-email 2.19.2 In-Reply-To: <20181210190948.6892-1-kamalheib1@gmail.com> References: <20181210190948.6892-1-kamalheib1@gmail.com> MIME-Version: 1.0 Sender: linux-rdma-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-rdma@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP Initialize ib_device_ops with the supported operations using ib_set_device_ops() and remove the use of check_driver_override(). Signed-off-by: Kamal Heib --- drivers/infiniband/sw/rdmavt/vt.c | 299 ++++++------------------------ 1 file changed, 54 insertions(+), 245 deletions(-) diff --git a/drivers/infiniband/sw/rdmavt/vt.c b/drivers/infiniband/sw/rdmavt/vt.c index 723d3daf2eba..c52b38fe2416 100644 --- a/drivers/infiniband/sw/rdmavt/vt.c +++ b/drivers/infiniband/sw/rdmavt/vt.c @@ -392,16 +392,51 @@ enum { _VERB_IDX_MAX /* Must always be last! */ }; -static inline int check_driver_override(struct rvt_dev_info *rdi, - size_t offset, void *func) -{ - if (!*(void **)((void *)&rdi->ibdev + offset)) { - *(void **)((void *)&rdi->ibdev + offset) = func; - return 0; - } - - return 1; -} +static const struct ib_device_ops rvt_dev_ops = { + .alloc_fmr = rvt_alloc_fmr, + .alloc_mr = rvt_alloc_mr, + .alloc_pd = rvt_alloc_pd, + .alloc_ucontext = rvt_alloc_ucontext, + .attach_mcast = rvt_attach_mcast, + .create_ah = rvt_create_ah, + .create_cq = rvt_create_cq, + .create_qp = rvt_create_qp, + .create_srq = rvt_create_srq, + .dealloc_fmr = rvt_dealloc_fmr, + .dealloc_pd = rvt_dealloc_pd, + .dealloc_ucontext = rvt_dealloc_ucontext, + .dereg_mr = rvt_dereg_mr, + .destroy_ah = rvt_destroy_ah, + .destroy_cq = rvt_destroy_cq, + .destroy_qp = rvt_destroy_qp, + .destroy_srq = rvt_destroy_srq, + .detach_mcast = rvt_detach_mcast, + .get_dma_mr = rvt_get_dma_mr, + .get_port_immutable = rvt_get_port_immutable, + .map_mr_sg = rvt_map_mr_sg, + .map_phys_fmr = rvt_map_phys_fmr, + .mmap = rvt_mmap, + .modify_ah = rvt_modify_ah, + .modify_device = rvt_modify_device, + .modify_port = rvt_modify_port, + .modify_qp = rvt_modify_qp, + .modify_srq = rvt_modify_srq, + .poll_cq = rvt_poll_cq, + .post_recv = rvt_post_recv, + .post_send = rvt_post_send, + .post_srq_recv = rvt_post_srq_recv, + .query_ah = rvt_query_ah, + .query_device = rvt_query_device, + .query_gid = rvt_query_gid, + .query_pkey = rvt_query_pkey, + .query_port = rvt_query_port, + .query_qp = rvt_query_qp, + .query_srq = rvt_query_srq, + .reg_user_mr = rvt_reg_user_mr, + .req_notify_cq = rvt_req_notify_cq, + .resize_cq = rvt_resize_cq, + .unmap_fmr = rvt_unmap_fmr, +}; static noinline int check_support(struct rvt_dev_info *rdi, int verb) { @@ -416,76 +451,36 @@ static noinline int check_support(struct rvt_dev_info *rdi, int verb) return -EINVAL; break; - case QUERY_DEVICE: - check_driver_override(rdi, offsetof(struct ib_device, - query_device), - rvt_query_device); - break; - case MODIFY_DEVICE: /* * rdmavt does not support modify device currently drivers must * provide. 
*/ - if (!check_driver_override(rdi, offsetof(struct ib_device, - modify_device), - rvt_modify_device)) + if (!rdi->ibdev.modify_device) return -EOPNOTSUPP; break; case QUERY_PORT: - if (!check_driver_override(rdi, offsetof(struct ib_device, - query_port), - rvt_query_port)) + if (!rdi->ibdev.query_port) if (!rdi->driver_f.query_port_state) return -EINVAL; break; case MODIFY_PORT: - if (!check_driver_override(rdi, offsetof(struct ib_device, - modify_port), - rvt_modify_port)) + if (!rdi->ibdev.modify_port) if (!rdi->driver_f.cap_mask_chg || !rdi->driver_f.shut_down_port) return -EINVAL; break; - case QUERY_PKEY: - check_driver_override(rdi, offsetof(struct ib_device, - query_pkey), - rvt_query_pkey); - break; - case QUERY_GID: - if (!check_driver_override(rdi, offsetof(struct ib_device, - query_gid), - rvt_query_gid)) + if (!rdi->ibdev.query_gid) if (!rdi->driver_f.get_guid_be) return -EINVAL; break; - case ALLOC_UCONTEXT: - check_driver_override(rdi, offsetof(struct ib_device, - alloc_ucontext), - rvt_alloc_ucontext); - break; - - case DEALLOC_UCONTEXT: - check_driver_override(rdi, offsetof(struct ib_device, - dealloc_ucontext), - rvt_dealloc_ucontext); - break; - - case GET_PORT_IMMUTABLE: - check_driver_override(rdi, offsetof(struct ib_device, - get_port_immutable), - rvt_get_port_immutable); - break; - case CREATE_QP: - if (!check_driver_override(rdi, offsetof(struct ib_device, - create_qp), - rvt_create_qp)) + if (!rdi->ibdev.create_qp) if (!rdi->driver_f.qp_priv_alloc || !rdi->driver_f.qp_priv_free || !rdi->driver_f.notify_qp_reset || @@ -496,9 +491,7 @@ static noinline int check_support(struct rvt_dev_info *rdi, int verb) break; case MODIFY_QP: - if (!check_driver_override(rdi, offsetof(struct ib_device, - modify_qp), - rvt_modify_qp)) + if (!rdi->ibdev.modify_qp) if (!rdi->driver_f.notify_qp_reset || !rdi->driver_f.schedule_send || !rdi->driver_f.get_pmtu_from_attr || @@ -512,9 +505,7 @@ static noinline int check_support(struct rvt_dev_info *rdi, int verb) break; case DESTROY_QP: - if (!check_driver_override(rdi, offsetof(struct ib_device, - destroy_qp), - rvt_destroy_qp)) + if (!rdi->ibdev.destroy_qp) if (!rdi->driver_f.qp_priv_free || !rdi->driver_f.notify_qp_reset || !rdi->driver_f.flush_qp_waiters || @@ -523,197 +514,14 @@ static noinline int check_support(struct rvt_dev_info *rdi, int verb) return -EINVAL; break; - case QUERY_QP: - check_driver_override(rdi, offsetof(struct ib_device, - query_qp), - rvt_query_qp); - break; - case POST_SEND: - if (!check_driver_override(rdi, offsetof(struct ib_device, - post_send), - rvt_post_send)) + if (!rdi->ibdev.post_send) if (!rdi->driver_f.schedule_send || !rdi->driver_f.do_send || !rdi->post_parms) return -EINVAL; break; - case POST_RECV: - check_driver_override(rdi, offsetof(struct ib_device, - post_recv), - rvt_post_recv); - break; - case POST_SRQ_RECV: - check_driver_override(rdi, offsetof(struct ib_device, - post_srq_recv), - rvt_post_srq_recv); - break; - - case CREATE_AH: - check_driver_override(rdi, offsetof(struct ib_device, - create_ah), - rvt_create_ah); - break; - - case DESTROY_AH: - check_driver_override(rdi, offsetof(struct ib_device, - destroy_ah), - rvt_destroy_ah); - break; - - case MODIFY_AH: - check_driver_override(rdi, offsetof(struct ib_device, - modify_ah), - rvt_modify_ah); - break; - - case QUERY_AH: - check_driver_override(rdi, offsetof(struct ib_device, - query_ah), - rvt_query_ah); - break; - - case CREATE_SRQ: - check_driver_override(rdi, offsetof(struct ib_device, - create_srq), - rvt_create_srq); - break; - 
- case MODIFY_SRQ: - check_driver_override(rdi, offsetof(struct ib_device, - modify_srq), - rvt_modify_srq); - break; - - case DESTROY_SRQ: - check_driver_override(rdi, offsetof(struct ib_device, - destroy_srq), - rvt_destroy_srq); - break; - - case QUERY_SRQ: - check_driver_override(rdi, offsetof(struct ib_device, - query_srq), - rvt_query_srq); - break; - - case ATTACH_MCAST: - check_driver_override(rdi, offsetof(struct ib_device, - attach_mcast), - rvt_attach_mcast); - break; - - case DETACH_MCAST: - check_driver_override(rdi, offsetof(struct ib_device, - detach_mcast), - rvt_detach_mcast); - break; - - case GET_DMA_MR: - check_driver_override(rdi, offsetof(struct ib_device, - get_dma_mr), - rvt_get_dma_mr); - break; - - case REG_USER_MR: - check_driver_override(rdi, offsetof(struct ib_device, - reg_user_mr), - rvt_reg_user_mr); - break; - - case DEREG_MR: - check_driver_override(rdi, offsetof(struct ib_device, - dereg_mr), - rvt_dereg_mr); - break; - - case ALLOC_FMR: - check_driver_override(rdi, offsetof(struct ib_device, - alloc_fmr), - rvt_alloc_fmr); - break; - - case ALLOC_MR: - check_driver_override(rdi, offsetof(struct ib_device, - alloc_mr), - rvt_alloc_mr); - break; - - case MAP_MR_SG: - check_driver_override(rdi, offsetof(struct ib_device, - map_mr_sg), - rvt_map_mr_sg); - break; - - case MAP_PHYS_FMR: - check_driver_override(rdi, offsetof(struct ib_device, - map_phys_fmr), - rvt_map_phys_fmr); - break; - - case UNMAP_FMR: - check_driver_override(rdi, offsetof(struct ib_device, - unmap_fmr), - rvt_unmap_fmr); - break; - - case DEALLOC_FMR: - check_driver_override(rdi, offsetof(struct ib_device, - dealloc_fmr), - rvt_dealloc_fmr); - break; - - case MMAP: - check_driver_override(rdi, offsetof(struct ib_device, - mmap), - rvt_mmap); - break; - - case CREATE_CQ: - check_driver_override(rdi, offsetof(struct ib_device, - create_cq), - rvt_create_cq); - break; - - case DESTROY_CQ: - check_driver_override(rdi, offsetof(struct ib_device, - destroy_cq), - rvt_destroy_cq); - break; - - case POLL_CQ: - check_driver_override(rdi, offsetof(struct ib_device, - poll_cq), - rvt_poll_cq); - break; - - case REQ_NOTFIY_CQ: - check_driver_override(rdi, offsetof(struct ib_device, - req_notify_cq), - rvt_req_notify_cq); - break; - - case RESIZE_CQ: - check_driver_override(rdi, offsetof(struct ib_device, - resize_cq), - rvt_resize_cq); - break; - - case ALLOC_PD: - check_driver_override(rdi, offsetof(struct ib_device, - alloc_pd), - rvt_alloc_pd); - break; - - case DEALLOC_PD: - check_driver_override(rdi, offsetof(struct ib_device, - dealloc_pd), - rvt_dealloc_pd); - break; - - default: - return -EINVAL; } return 0; @@ -745,6 +553,7 @@ int rvt_register_device(struct rvt_dev_info *rdi, u32 driver_id) return -EINVAL; } + ib_set_device_ops(&rdi->ibdev, &rvt_dev_ops); /* Once we get past here we can use rvt_pr macros and tracepoints */ trace_rvt_dbg(rdi, "Driver attempting registration"); From patchwork Mon Dec 10 19:09:48 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Kamal Heib X-Patchwork-Id: 10722289 X-Patchwork-Delegate: jgg@ziepe.ca Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 070A06C5 for ; Mon, 10 Dec 2018 19:10:57 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id E0D7B2A795 for ; Mon, 10 Dec 2018 19:10:56 +0000 
(UTC) From: Kamal Heib To: Doug Ledford , Jason Gunthorpe Cc: linux-rdma@vger.kernel.org, kamalheib1@gmail.com Subject: [PATCH rdma-next v5 20/20] RDMA: Start use ib_device_ops Date: Mon, 10 Dec 2018 21:09:48 +0200 Message-Id: <20181210190948.6892-21-kamalheib1@gmail.com> X-Mailer: git-send-email 2.19.2 In-Reply-To: <20181210190948.6892-1-kamalheib1@gmail.com> References: <20181210190948.6892-1-kamalheib1@gmail.com> MIME-Version: 1.0 Sender: linux-rdma-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-rdma@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP Make all the required changes to start using the ib_device_ops structure.
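The whole series boils down to one calling convention: driver callbacks now live in a single struct ib_device_ops embedded in struct ib_device, drivers publish their callbacks with ib_set_device_ops(), and the core dispatches through device->ops.<verb>(). Because ib_set_device_ops() only fills slots that are still empty, a driver (or rdmavt) can install its specialized verbs first and then layer generic defaults underneath, which is why the rdmavt patch earlier in the series could drop check_driver_override(). Below is a minimal, self-contained userspace sketch of that merge-and-dispatch behaviour; it is not kernel code, the structure is cut down to two callbacks, and names such as drv_query_port and generic_ops are hypothetical stand-ins, while the SET_DEVICE_OP copy rule mirrors the drivers/infiniband/core/device.c hunk in the diff that follows.

/*
 * Standalone model (plain C, not kernel code) of the ib_device_ops pattern:
 * the driver registers a const ops table, the core copies only the slots the
 * device has not already filled, and callers dispatch through dev.ops.
 */
#include <stdio.h>

struct ib_device_ops {
	int (*query_pkey)(int port, int index);
	int (*query_port)(int port);
};

struct ib_device {
	const char *name;
	struct ib_device_ops ops;	/* callbacks now live in one table */
};

/* Copy only the ops the device has not set yet, mirroring SET_DEVICE_OP(). */
static void ib_set_device_ops(struct ib_device *dev,
			      const struct ib_device_ops *ops)
{
#define SET_DEVICE_OP(ptr, name)			\
	do {						\
		if (ops->name)				\
			if (!(ptr)->name)		\
				(ptr)->name = ops->name;\
	} while (0)

	SET_DEVICE_OP(&dev->ops, query_pkey);
	SET_DEVICE_OP(&dev->ops, query_port);
#undef SET_DEVICE_OP
}

/* Hypothetical driver-specific callback: should win over the default. */
static int drv_query_port(int port)
{
	printf("driver query_port(%d)\n", port);
	return 0;
}

/* Hypothetical generic defaults, playing the role of rvt_dev_ops. */
static int generic_query_pkey(int port, int index)
{
	printf("generic query_pkey(%d, %d)\n", port, index);
	return 0;
}

static int generic_query_port(int port)
{
	printf("generic query_port(%d)\n", port);
	return 0;
}

static const struct ib_device_ops generic_ops = {
	.query_pkey = generic_query_pkey,
	.query_port = generic_query_port,
};

int main(void)
{
	struct ib_device dev = { .name = "demo0" };

	printf("%s:\n", dev.name);

	/* Driver installs its specialized op first ... */
	dev.ops.query_port = drv_query_port;
	/* ... then the mid-layer fills in whatever is still empty. */
	ib_set_device_ops(&dev, &generic_ops);

	/* Core-style call sites go through dev.ops.<op>(). */
	if (dev.ops.query_pkey)
		dev.ops.query_pkey(1, 0);	/* generic default */
	if (dev.ops.query_port)
		dev.ops.query_port(1);		/* driver override preserved */

	return 0;
}

Built with any C compiler, this prints the generic query_pkey but the driver's query_port, showing that the later ib_set_device_ops() call does not clobber an op the device had already set, exactly the guarantee the old per-verb check_driver_override() provided.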
Signed-off-by: Kamal Heib --- drivers/infiniband/core/cache.c | 12 +- drivers/infiniband/core/core_priv.h | 12 +- drivers/infiniband/core/cq.c | 6 +- drivers/infiniband/core/device.c | 211 ++++++------ drivers/infiniband/core/fmr_pool.c | 4 +- drivers/infiniband/core/mad.c | 24 +- drivers/infiniband/core/nldev.c | 4 +- drivers/infiniband/core/opa_smi.h | 4 +- drivers/infiniband/core/rdma_core.c | 6 +- drivers/infiniband/core/security.c | 8 +- drivers/infiniband/core/smi.h | 4 +- drivers/infiniband/core/sysfs.c | 28 +- drivers/infiniband/core/ucm.c | 2 +- drivers/infiniband/core/uverbs_cmd.c | 58 ++-- drivers/infiniband/core/uverbs_main.c | 14 +- drivers/infiniband/core/uverbs_std_types.c | 2 +- .../core/uverbs_std_types_counters.c | 10 +- drivers/infiniband/core/uverbs_std_types_cq.c | 6 +- drivers/infiniband/core/uverbs_std_types_dm.c | 6 +- .../core/uverbs_std_types_flow_action.c | 14 +- drivers/infiniband/core/uverbs_std_types_mr.c | 4 +- drivers/infiniband/core/verbs.c | 159 ++++----- drivers/infiniband/hw/i40iw/i40iw_cm.c | 2 +- drivers/infiniband/hw/mlx4/alias_GUID.c | 2 +- drivers/infiniband/hw/mlx5/main.c | 2 +- drivers/infiniband/hw/nes/nes_cm.c | 2 +- drivers/infiniband/sw/rdmavt/vt.c | 16 +- drivers/infiniband/ulp/ipoib/ipoib_main.c | 4 +- drivers/infiniband/ulp/iser/iser_memory.c | 4 +- .../infiniband/ulp/opa_vnic/opa_vnic_netdev.c | 8 +- drivers/infiniband/ulp/srp/ib_srp.c | 6 +- fs/cifs/smbdirect.c | 2 +- include/rdma/ib_verbs.h | 303 +----------------- include/rdma/uverbs_ioctl.h | 12 +- net/rds/ib.c | 4 +- net/sunrpc/xprtrdma/fmr_ops.c | 2 +- 36 files changed, 351 insertions(+), 616 deletions(-) diff --git a/drivers/infiniband/core/cache.c b/drivers/infiniband/core/cache.c index 5b2fce4a7091..22e20ed5a393 100644 --- a/drivers/infiniband/core/cache.c +++ b/drivers/infiniband/core/cache.c @@ -217,7 +217,7 @@ static void free_gid_entry_locked(struct ib_gid_table_entry *entry) if (rdma_cap_roce_gid_table(device, port_num) && entry->state != GID_TABLE_ENTRY_INVALID) - device->del_gid(&entry->attr, &entry->context); + device->ops.del_gid(&entry->attr, &entry->context); write_lock_irq(&table->rwlock); @@ -324,7 +324,7 @@ static int add_roce_gid(struct ib_gid_table_entry *entry) return -EINVAL; } if (rdma_cap_roce_gid_table(attr->device, attr->port_num)) { - ret = attr->device->add_gid(attr, &entry->context); + ret = attr->device->ops.add_gid(attr, &entry->context); if (ret) { dev_err(&attr->device->dev, "%s GID add failed port=%d index=%d\n", @@ -548,8 +548,8 @@ int ib_cache_gid_add(struct ib_device *ib_dev, u8 port, unsigned long mask; int ret; - if (ib_dev->get_netdev) { - idev = ib_dev->get_netdev(ib_dev, port); + if (ib_dev->ops.get_netdev) { + idev = ib_dev->ops.get_netdev(ib_dev, port); if (idev && attr->ndev != idev) { union ib_gid default_gid; @@ -1296,9 +1296,9 @@ static int config_non_roce_gid_cache(struct ib_device *device, mutex_lock(&table->lock); for (i = 0; i < gid_tbl_len; ++i) { - if (!device->query_gid) + if (!device->ops.query_gid) continue; - ret = device->query_gid(device, port, i, &gid_attr.gid); + ret = device->ops.query_gid(device, port, i, &gid_attr.gid); if (ret) { dev_warn(&device->dev, "query_gid failed (%d) for index %d\n", ret, diff --git a/drivers/infiniband/core/core_priv.h b/drivers/infiniband/core/core_priv.h index cc7535c5e192..cea92624f9d4 100644 --- a/drivers/infiniband/core/core_priv.h +++ b/drivers/infiniband/core/core_priv.h @@ -215,10 +215,10 @@ static inline int ib_security_modify_qp(struct ib_qp *qp, int qp_attr_mask, struct ib_udata 
*udata) { - return qp->device->modify_qp(qp->real_qp, - qp_attr, - qp_attr_mask, - udata); + return qp->device->ops.modify_qp(qp->real_qp, + qp_attr, + qp_attr_mask, + udata); } static inline int ib_create_qp_security(struct ib_qp *qp, @@ -280,10 +280,10 @@ static inline struct ib_qp *_ib_create_qp(struct ib_device *dev, { struct ib_qp *qp; - if (!dev->create_qp) + if (!dev->ops.create_qp) return ERR_PTR(-EOPNOTSUPP); - qp = dev->create_qp(pd, attr, udata); + qp = dev->ops.create_qp(pd, attr, udata); if (IS_ERR(qp)) return qp; diff --git a/drivers/infiniband/core/cq.c b/drivers/infiniband/core/cq.c index b1e5365ddafa..7fb4f64ae933 100644 --- a/drivers/infiniband/core/cq.c +++ b/drivers/infiniband/core/cq.c @@ -145,7 +145,7 @@ struct ib_cq *__ib_alloc_cq(struct ib_device *dev, void *private, struct ib_cq *cq; int ret = -ENOMEM; - cq = dev->create_cq(dev, &cq_attr, NULL, NULL); + cq = dev->ops.create_cq(dev, &cq_attr, NULL, NULL); if (IS_ERR(cq)) return cq; @@ -193,7 +193,7 @@ struct ib_cq *__ib_alloc_cq(struct ib_device *dev, void *private, kfree(cq->wc); rdma_restrack_del(&cq->res); out_destroy_cq: - cq->device->destroy_cq(cq); + cq->device->ops.destroy_cq(cq); return ERR_PTR(ret); } EXPORT_SYMBOL(__ib_alloc_cq); @@ -225,7 +225,7 @@ void ib_free_cq(struct ib_cq *cq) kfree(cq->wc); rdma_restrack_del(&cq->res); - ret = cq->device->destroy_cq(cq); + ret = cq->device->ops.destroy_cq(cq); WARN_ON_ONCE(ret); } EXPORT_SYMBOL(ib_free_cq); diff --git a/drivers/infiniband/core/device.c b/drivers/infiniband/core/device.c index 3589894c46b8..a44f2d2c37cf 100644 --- a/drivers/infiniband/core/device.c +++ b/drivers/infiniband/core/device.c @@ -96,7 +96,7 @@ static struct notifier_block ibdev_lsm_nb = { static int ib_device_check_mandatory(struct ib_device *device) { -#define IB_MANDATORY_FUNC(x) { offsetof(struct ib_device, x), #x } +#define IB_MANDATORY_FUNC(x) { offsetof(struct ib_device_ops, x), #x } static const struct { size_t offset; char *name; @@ -122,7 +122,8 @@ static int ib_device_check_mandatory(struct ib_device *device) int i; for (i = 0; i < ARRAY_SIZE(mandatory_table); ++i) { - if (!*(void **) ((void *) device + mandatory_table[i].offset)) { + if (!*(void **) ((void *) &device->ops + + mandatory_table[i].offset)) { dev_warn(&device->dev, "Device is missing mandatory function %s\n", mandatory_table[i].name); @@ -373,8 +374,8 @@ static int read_port_immutable(struct ib_device *device) return -ENOMEM; for (port = start_port; port <= end_port; ++port) { - ret = device->get_port_immutable(device, port, - &device->port_immutable[port]); + ret = device->ops.get_port_immutable(device, port, + &device->port_immutable[port]); if (ret) return ret; @@ -386,8 +387,8 @@ static int read_port_immutable(struct ib_device *device) void ib_get_device_fw_str(struct ib_device *dev, char *str) { - if (dev->get_dev_fw_str) - dev->get_dev_fw_str(dev, str); + if (dev->ops.get_dev_fw_str) + dev->ops.get_dev_fw_str(dev, str); else str[0] = '\0'; } @@ -536,7 +537,7 @@ static int setup_device(struct ib_device *device) } memset(&device->attrs, 0, sizeof(device->attrs)); - ret = device->query_device(device, &device->attrs, &uhw); + ret = device->ops.query_device(device, &device->attrs, &uhw); if (ret) { dev_warn(&device->dev, "Couldn't query the device attributes\n"); @@ -923,14 +924,14 @@ int ib_query_port(struct ib_device *device, return -EINVAL; memset(port_attr, 0, sizeof(*port_attr)); - err = device->query_port(device, port_num, port_attr); + err = device->ops.query_port(device, port_num, port_attr); if (err || 
port_attr->subnet_prefix) return err; if (rdma_port_get_link_layer(device, port_num) != IB_LINK_LAYER_INFINIBAND) return 0; - err = device->query_gid(device, port_num, 0, &gid); + err = device->ops.query_gid(device, port_num, 0, &gid); if (err) return err; @@ -964,8 +965,8 @@ void ib_enum_roce_netdev(struct ib_device *ib_dev, if (rdma_protocol_roce(ib_dev, port)) { struct net_device *idev = NULL; - if (ib_dev->get_netdev) - idev = ib_dev->get_netdev(ib_dev, port); + if (ib_dev->ops.get_netdev) + idev = ib_dev->ops.get_netdev(ib_dev, port); if (idev && idev->reg_state >= NETREG_UNREGISTERED) { @@ -1042,7 +1043,7 @@ int ib_enum_all_devs(nldev_callback nldev_cb, struct sk_buff *skb, int ib_query_pkey(struct ib_device *device, u8 port_num, u16 index, u16 *pkey) { - return device->query_pkey(device, port_num, index, pkey); + return device->ops.query_pkey(device, port_num, index, pkey); } EXPORT_SYMBOL(ib_query_pkey); @@ -1059,11 +1060,11 @@ int ib_modify_device(struct ib_device *device, int device_modify_mask, struct ib_device_modify *device_modify) { - if (!device->modify_device) + if (!device->ops.modify_device) return -ENOSYS; - return device->modify_device(device, device_modify_mask, - device_modify); + return device->ops.modify_device(device, device_modify_mask, + device_modify); } EXPORT_SYMBOL(ib_modify_device); @@ -1087,9 +1088,10 @@ int ib_modify_port(struct ib_device *device, if (!rdma_is_port_valid(device, port_num)) return -EINVAL; - if (device->modify_port) - rc = device->modify_port(device, port_num, port_modify_mask, - port_modify); + if (device->ops.modify_port) + rc = device->ops.modify_port(device, port_num, + port_modify_mask, + port_modify); else rc = rdma_protocol_roce(device, port_num) ? 0 : -ENOSYS; return rc; @@ -1218,6 +1220,7 @@ EXPORT_SYMBOL(ib_get_net_dev_by_params); void ib_set_device_ops(struct ib_device *dev, const struct ib_device_ops *ops) { + struct ib_device_ops *dev_ops = &dev->ops; #define SET_DEVICE_OP(ptr, name) \ do { \ if (ops->name) \ @@ -1225,92 +1228,92 @@ void ib_set_device_ops(struct ib_device *dev, const struct ib_device_ops *ops) (ptr)->name = ops->name; \ } while (0) - SET_DEVICE_OP(dev, add_gid); - SET_DEVICE_OP(dev, alloc_dm); - SET_DEVICE_OP(dev, alloc_fmr); - SET_DEVICE_OP(dev, alloc_hw_stats); - SET_DEVICE_OP(dev, alloc_mr); - SET_DEVICE_OP(dev, alloc_mw); - SET_DEVICE_OP(dev, alloc_pd); - SET_DEVICE_OP(dev, alloc_rdma_netdev); - SET_DEVICE_OP(dev, alloc_ucontext); - SET_DEVICE_OP(dev, alloc_xrcd); - SET_DEVICE_OP(dev, attach_mcast); - SET_DEVICE_OP(dev, check_mr_status); - SET_DEVICE_OP(dev, create_ah); - SET_DEVICE_OP(dev, create_counters); - SET_DEVICE_OP(dev, create_cq); - SET_DEVICE_OP(dev, create_flow); - SET_DEVICE_OP(dev, create_flow_action_esp); - SET_DEVICE_OP(dev, create_qp); - SET_DEVICE_OP(dev, create_rwq_ind_table); - SET_DEVICE_OP(dev, create_srq); - SET_DEVICE_OP(dev, create_wq); - SET_DEVICE_OP(dev, dealloc_dm); - SET_DEVICE_OP(dev, dealloc_fmr); - SET_DEVICE_OP(dev, dealloc_mw); - SET_DEVICE_OP(dev, dealloc_pd); - SET_DEVICE_OP(dev, dealloc_ucontext); - SET_DEVICE_OP(dev, dealloc_xrcd); - SET_DEVICE_OP(dev, del_gid); - SET_DEVICE_OP(dev, dereg_mr); - SET_DEVICE_OP(dev, destroy_ah); - SET_DEVICE_OP(dev, destroy_counters); - SET_DEVICE_OP(dev, destroy_cq); - SET_DEVICE_OP(dev, destroy_flow); - SET_DEVICE_OP(dev, destroy_flow_action); - SET_DEVICE_OP(dev, destroy_qp); - SET_DEVICE_OP(dev, destroy_rwq_ind_table); - SET_DEVICE_OP(dev, destroy_srq); - SET_DEVICE_OP(dev, destroy_wq); - SET_DEVICE_OP(dev, detach_mcast); - 
SET_DEVICE_OP(dev, disassociate_ucontext); - SET_DEVICE_OP(dev, drain_rq); - SET_DEVICE_OP(dev, drain_sq); - SET_DEVICE_OP(dev, get_dev_fw_str); - SET_DEVICE_OP(dev, get_dma_mr); - SET_DEVICE_OP(dev, get_hw_stats); - SET_DEVICE_OP(dev, get_link_layer); - SET_DEVICE_OP(dev, get_netdev); - SET_DEVICE_OP(dev, get_port_immutable); - SET_DEVICE_OP(dev, get_vector_affinity); - SET_DEVICE_OP(dev, get_vf_config); - SET_DEVICE_OP(dev, get_vf_stats); - SET_DEVICE_OP(dev, map_mr_sg); - SET_DEVICE_OP(dev, map_phys_fmr); - SET_DEVICE_OP(dev, mmap); - SET_DEVICE_OP(dev, modify_ah); - SET_DEVICE_OP(dev, modify_cq); - SET_DEVICE_OP(dev, modify_device); - SET_DEVICE_OP(dev, modify_flow_action_esp); - SET_DEVICE_OP(dev, modify_port); - SET_DEVICE_OP(dev, modify_qp); - SET_DEVICE_OP(dev, modify_srq); - SET_DEVICE_OP(dev, modify_wq); - SET_DEVICE_OP(dev, peek_cq); - SET_DEVICE_OP(dev, poll_cq); - SET_DEVICE_OP(dev, post_recv); - SET_DEVICE_OP(dev, post_send); - SET_DEVICE_OP(dev, post_srq_recv); - SET_DEVICE_OP(dev, process_mad); - SET_DEVICE_OP(dev, query_ah); - SET_DEVICE_OP(dev, query_device); - SET_DEVICE_OP(dev, query_gid); - SET_DEVICE_OP(dev, query_pkey); - SET_DEVICE_OP(dev, query_port); - SET_DEVICE_OP(dev, query_qp); - SET_DEVICE_OP(dev, query_srq); - SET_DEVICE_OP(dev, rdma_netdev_get_params); - SET_DEVICE_OP(dev, read_counters); - SET_DEVICE_OP(dev, reg_dm_mr); - SET_DEVICE_OP(dev, reg_user_mr); - SET_DEVICE_OP(dev, req_ncomp_notif); - SET_DEVICE_OP(dev, req_notify_cq); - SET_DEVICE_OP(dev, rereg_user_mr); - SET_DEVICE_OP(dev, resize_cq); - SET_DEVICE_OP(dev, set_vf_guid); - SET_DEVICE_OP(dev, set_vf_link_state); - SET_DEVICE_OP(dev, unmap_fmr); + SET_DEVICE_OP(dev_ops, add_gid); + SET_DEVICE_OP(dev_ops, alloc_dm); + SET_DEVICE_OP(dev_ops, alloc_fmr); + SET_DEVICE_OP(dev_ops, alloc_hw_stats); + SET_DEVICE_OP(dev_ops, alloc_mr); + SET_DEVICE_OP(dev_ops, alloc_mw); + SET_DEVICE_OP(dev_ops, alloc_pd); + SET_DEVICE_OP(dev_ops, alloc_rdma_netdev); + SET_DEVICE_OP(dev_ops, alloc_ucontext); + SET_DEVICE_OP(dev_ops, alloc_xrcd); + SET_DEVICE_OP(dev_ops, attach_mcast); + SET_DEVICE_OP(dev_ops, check_mr_status); + SET_DEVICE_OP(dev_ops, create_ah); + SET_DEVICE_OP(dev_ops, create_counters); + SET_DEVICE_OP(dev_ops, create_cq); + SET_DEVICE_OP(dev_ops, create_flow); + SET_DEVICE_OP(dev_ops, create_flow_action_esp); + SET_DEVICE_OP(dev_ops, create_qp); + SET_DEVICE_OP(dev_ops, create_rwq_ind_table); + SET_DEVICE_OP(dev_ops, create_srq); + SET_DEVICE_OP(dev_ops, create_wq); + SET_DEVICE_OP(dev_ops, dealloc_dm); + SET_DEVICE_OP(dev_ops, dealloc_fmr); + SET_DEVICE_OP(dev_ops, dealloc_mw); + SET_DEVICE_OP(dev_ops, dealloc_pd); + SET_DEVICE_OP(dev_ops, dealloc_ucontext); + SET_DEVICE_OP(dev_ops, dealloc_xrcd); + SET_DEVICE_OP(dev_ops, del_gid); + SET_DEVICE_OP(dev_ops, dereg_mr); + SET_DEVICE_OP(dev_ops, destroy_ah); + SET_DEVICE_OP(dev_ops, destroy_counters); + SET_DEVICE_OP(dev_ops, destroy_cq); + SET_DEVICE_OP(dev_ops, destroy_flow); + SET_DEVICE_OP(dev_ops, destroy_flow_action); + SET_DEVICE_OP(dev_ops, destroy_qp); + SET_DEVICE_OP(dev_ops, destroy_rwq_ind_table); + SET_DEVICE_OP(dev_ops, destroy_srq); + SET_DEVICE_OP(dev_ops, destroy_wq); + SET_DEVICE_OP(dev_ops, detach_mcast); + SET_DEVICE_OP(dev_ops, disassociate_ucontext); + SET_DEVICE_OP(dev_ops, drain_rq); + SET_DEVICE_OP(dev_ops, drain_sq); + SET_DEVICE_OP(dev_ops, get_dev_fw_str); + SET_DEVICE_OP(dev_ops, get_dma_mr); + SET_DEVICE_OP(dev_ops, get_hw_stats); + SET_DEVICE_OP(dev_ops, get_link_layer); + SET_DEVICE_OP(dev_ops, get_netdev); + 
SET_DEVICE_OP(dev_ops, get_port_immutable); + SET_DEVICE_OP(dev_ops, get_vector_affinity); + SET_DEVICE_OP(dev_ops, get_vf_config); + SET_DEVICE_OP(dev_ops, get_vf_stats); + SET_DEVICE_OP(dev_ops, map_mr_sg); + SET_DEVICE_OP(dev_ops, map_phys_fmr); + SET_DEVICE_OP(dev_ops, mmap); + SET_DEVICE_OP(dev_ops, modify_ah); + SET_DEVICE_OP(dev_ops, modify_cq); + SET_DEVICE_OP(dev_ops, modify_device); + SET_DEVICE_OP(dev_ops, modify_flow_action_esp); + SET_DEVICE_OP(dev_ops, modify_port); + SET_DEVICE_OP(dev_ops, modify_qp); + SET_DEVICE_OP(dev_ops, modify_srq); + SET_DEVICE_OP(dev_ops, modify_wq); + SET_DEVICE_OP(dev_ops, peek_cq); + SET_DEVICE_OP(dev_ops, poll_cq); + SET_DEVICE_OP(dev_ops, post_recv); + SET_DEVICE_OP(dev_ops, post_send); + SET_DEVICE_OP(dev_ops, post_srq_recv); + SET_DEVICE_OP(dev_ops, process_mad); + SET_DEVICE_OP(dev_ops, query_ah); + SET_DEVICE_OP(dev_ops, query_device); + SET_DEVICE_OP(dev_ops, query_gid); + SET_DEVICE_OP(dev_ops, query_pkey); + SET_DEVICE_OP(dev_ops, query_port); + SET_DEVICE_OP(dev_ops, query_qp); + SET_DEVICE_OP(dev_ops, query_srq); + SET_DEVICE_OP(dev_ops, rdma_netdev_get_params); + SET_DEVICE_OP(dev_ops, read_counters); + SET_DEVICE_OP(dev_ops, reg_dm_mr); + SET_DEVICE_OP(dev_ops, reg_user_mr); + SET_DEVICE_OP(dev_ops, req_ncomp_notif); + SET_DEVICE_OP(dev_ops, req_notify_cq); + SET_DEVICE_OP(dev_ops, rereg_user_mr); + SET_DEVICE_OP(dev_ops, resize_cq); + SET_DEVICE_OP(dev_ops, set_vf_guid); + SET_DEVICE_OP(dev_ops, set_vf_link_state); + SET_DEVICE_OP(dev_ops, unmap_fmr); } EXPORT_SYMBOL(ib_set_device_ops); diff --git a/drivers/infiniband/core/fmr_pool.c b/drivers/infiniband/core/fmr_pool.c index b00dfd2ad31e..7d841b689a1e 100644 --- a/drivers/infiniband/core/fmr_pool.c +++ b/drivers/infiniband/core/fmr_pool.c @@ -211,8 +211,8 @@ struct ib_fmr_pool *ib_create_fmr_pool(struct ib_pd *pd, return ERR_PTR(-EINVAL); device = pd->device; - if (!device->alloc_fmr || !device->dealloc_fmr || - !device->map_phys_fmr || !device->unmap_fmr) { + if (!device->ops.alloc_fmr || !device->ops.dealloc_fmr || + !device->ops.map_phys_fmr || !device->ops.unmap_fmr) { dev_info(&device->dev, "Device does not support FMRs\n"); return ERR_PTR(-ENOSYS); } diff --git a/drivers/infiniband/core/mad.c b/drivers/infiniband/core/mad.c index d7025cd5be28..7433888f4fc0 100644 --- a/drivers/infiniband/core/mad.c +++ b/drivers/infiniband/core/mad.c @@ -888,10 +888,10 @@ static int handle_outgoing_dr_smp(struct ib_mad_agent_private *mad_agent_priv, } /* No GRH for DR SMP */ - ret = device->process_mad(device, 0, port_num, &mad_wc, NULL, - (const struct ib_mad_hdr *)smp, mad_size, - (struct ib_mad_hdr *)mad_priv->mad, - &mad_size, &out_mad_pkey_index); + ret = device->ops.process_mad(device, 0, port_num, &mad_wc, NULL, + (const struct ib_mad_hdr *)smp, mad_size, + (struct ib_mad_hdr *)mad_priv->mad, + &mad_size, &out_mad_pkey_index); switch (ret) { case IB_MAD_RESULT_SUCCESS | IB_MAD_RESULT_REPLY: @@ -2305,14 +2305,14 @@ static void ib_mad_recv_done(struct ib_cq *cq, struct ib_wc *wc) } /* Give driver "right of first refusal" on incoming MAD */ - if (port_priv->device->process_mad) { - ret = port_priv->device->process_mad(port_priv->device, 0, - port_priv->port_num, - wc, &recv->grh, - (const struct ib_mad_hdr *)recv->mad, - recv->mad_size, - (struct ib_mad_hdr *)response->mad, - &mad_size, &resp_mad_pkey_index); + if (port_priv->device->ops.process_mad) { + ret = port_priv->device->ops.process_mad(port_priv->device, 0, + port_priv->port_num, + wc, &recv->grh, + (const struct ib_mad_hdr 
*)recv->mad, + recv->mad_size, + (struct ib_mad_hdr *)response->mad, + &mad_size, &resp_mad_pkey_index); if (opa) wc->pkey_index = resp_mad_pkey_index; diff --git a/drivers/infiniband/core/nldev.c b/drivers/infiniband/core/nldev.c index 9abbadb9e366..093bbfcdc53b 100644 --- a/drivers/infiniband/core/nldev.c +++ b/drivers/infiniband/core/nldev.c @@ -259,8 +259,8 @@ static int fill_port_info(struct sk_buff *msg, if (nla_put_u8(msg, RDMA_NLDEV_ATTR_PORT_PHYS_STATE, attr.phys_state)) return -EMSGSIZE; - if (device->get_netdev) - netdev = device->get_netdev(device, port); + if (device->ops.get_netdev) + netdev = device->ops.get_netdev(device, port); if (netdev && net_eq(dev_net(netdev), net)) { ret = nla_put_u32(msg, diff --git a/drivers/infiniband/core/opa_smi.h b/drivers/infiniband/core/opa_smi.h index 3bfab3505a29..af4879bdf3d6 100644 --- a/drivers/infiniband/core/opa_smi.h +++ b/drivers/infiniband/core/opa_smi.h @@ -55,7 +55,7 @@ static inline enum smi_action opa_smi_check_local_smp(struct opa_smp *smp, { /* C14-9:3 -- We're at the end of the DR segment of path */ /* C14-9:4 -- Hop Pointer = Hop Count + 1 -> give to SMA/SM */ - return (device->process_mad && + return (device->ops.process_mad && !opa_get_smp_direction(smp) && (smp->hop_ptr == smp->hop_cnt + 1)) ? IB_SMI_HANDLE : IB_SMI_DISCARD; @@ -70,7 +70,7 @@ static inline enum smi_action opa_smi_check_local_returning_smp(struct opa_smp * { /* C14-13:3 -- We're at the end of the DR segment of path */ /* C14-13:4 -- Hop Pointer == 0 -> give to SM */ - return (device->process_mad && + return (device->ops.process_mad && opa_get_smp_direction(smp) && !smp->hop_ptr) ? IB_SMI_HANDLE : IB_SMI_DISCARD; } diff --git a/drivers/infiniband/core/rdma_core.c b/drivers/infiniband/core/rdma_core.c index 7d2f1ef75025..6c4747e61d2b 100644 --- a/drivers/infiniband/core/rdma_core.c +++ b/drivers/infiniband/core/rdma_core.c @@ -820,8 +820,8 @@ static void ufile_destroy_ucontext(struct ib_uverbs_file *ufile, */ if (reason == RDMA_REMOVE_DRIVER_REMOVE) { uverbs_user_mmap_disassociate(ufile); - if (ib_dev->disassociate_ucontext) - ib_dev->disassociate_ucontext(ucontext); + if (ib_dev->ops.disassociate_ucontext) + ib_dev->ops.disassociate_ucontext(ucontext); } ib_rdmacg_uncharge(&ucontext->cg_obj, ib_dev, @@ -833,7 +833,7 @@ static void ufile_destroy_ucontext(struct ib_uverbs_file *ufile, * FIXME: Drivers are not permitted to fail dealloc_ucontext, remove * the error return. 
*/ - ret = ib_dev->dealloc_ucontext(ucontext); + ret = ib_dev->ops.dealloc_ucontext(ucontext); WARN_ON(ret); ufile->ucontext = NULL; diff --git a/drivers/infiniband/core/security.c b/drivers/infiniband/core/security.c index 1143c0448666..1efadbccf394 100644 --- a/drivers/infiniband/core/security.c +++ b/drivers/infiniband/core/security.c @@ -626,10 +626,10 @@ int ib_security_modify_qp(struct ib_qp *qp, } if (!ret) - ret = real_qp->device->modify_qp(real_qp, - qp_attr, - qp_attr_mask, - udata); + ret = real_qp->device->ops.modify_qp(real_qp, + qp_attr, + qp_attr_mask, + udata); if (new_pps) { /* Clean up the lists and free the appropriate diff --git a/drivers/infiniband/core/smi.h b/drivers/infiniband/core/smi.h index 33c91c8a16e9..91d9b353ab85 100644 --- a/drivers/infiniband/core/smi.h +++ b/drivers/infiniband/core/smi.h @@ -67,7 +67,7 @@ static inline enum smi_action smi_check_local_smp(struct ib_smp *smp, { /* C14-9:3 -- We're at the end of the DR segment of path */ /* C14-9:4 -- Hop Pointer = Hop Count + 1 -> give to SMA/SM */ - return ((device->process_mad && + return ((device->ops.process_mad && !ib_get_smp_direction(smp) && (smp->hop_ptr == smp->hop_cnt + 1)) ? IB_SMI_HANDLE : IB_SMI_DISCARD); @@ -82,7 +82,7 @@ static inline enum smi_action smi_check_local_returning_smp(struct ib_smp *smp, { /* C14-13:3 -- We're at the end of the DR segment of path */ /* C14-13:4 -- Hop Pointer == 0 -> give to SM */ - return ((device->process_mad && + return ((device->ops.process_mad && ib_get_smp_direction(smp) && !smp->hop_ptr) ? IB_SMI_HANDLE : IB_SMI_DISCARD); } diff --git a/drivers/infiniband/core/sysfs.c b/drivers/infiniband/core/sysfs.c index 6fcce2c206c6..80f68eb0ba5c 100644 --- a/drivers/infiniband/core/sysfs.c +++ b/drivers/infiniband/core/sysfs.c @@ -462,7 +462,7 @@ static int get_perf_mad(struct ib_device *dev, int port_num, __be16 attr, u16 out_mad_pkey_index = 0; ssize_t ret; - if (!dev->process_mad) + if (!dev->ops.process_mad) return -ENOSYS; in_mad = kzalloc(sizeof *in_mad, GFP_KERNEL); @@ -481,11 +481,11 @@ static int get_perf_mad(struct ib_device *dev, int port_num, __be16 attr, if (attr != IB_PMA_CLASS_PORT_INFO) in_mad->data[41] = port_num; /* PortSelect field */ - if ((dev->process_mad(dev, IB_MAD_IGNORE_MKEY, - port_num, NULL, NULL, - (const struct ib_mad_hdr *)in_mad, mad_size, - (struct ib_mad_hdr *)out_mad, &mad_size, - &out_mad_pkey_index) & + if ((dev->ops.process_mad(dev, IB_MAD_IGNORE_MKEY, + port_num, NULL, NULL, + (const struct ib_mad_hdr *)in_mad, mad_size, + (struct ib_mad_hdr *)out_mad, &mad_size, + &out_mad_pkey_index) & (IB_MAD_RESULT_SUCCESS | IB_MAD_RESULT_REPLY)) != (IB_MAD_RESULT_SUCCESS | IB_MAD_RESULT_REPLY)) { ret = -EINVAL; @@ -786,7 +786,7 @@ static int update_hw_stats(struct ib_device *dev, struct rdma_hw_stats *stats, if (time_is_after_eq_jiffies(stats->timestamp + stats->lifespan)) return 0; - ret = dev->get_hw_stats(dev, stats, port_num, index); + ret = dev->ops.get_hw_stats(dev, stats, port_num, index); if (ret < 0) return ret; if (ret == stats->num_counters) @@ -946,7 +946,7 @@ static void setup_hw_stats(struct ib_device *device, struct ib_port *port, struct rdma_hw_stats *stats; int i, ret; - stats = device->alloc_hw_stats(device, port_num); + stats = device->ops.alloc_hw_stats(device, port_num); if (!stats) return; @@ -964,8 +964,8 @@ static void setup_hw_stats(struct ib_device *device, struct ib_port *port, if (!hsag) goto err_free_stats; - ret = device->get_hw_stats(device, stats, port_num, - stats->num_counters); + ret = 
device->ops.get_hw_stats(device, stats, port_num, + stats->num_counters); if (ret != stats->num_counters) goto err_free_hsag; @@ -1057,7 +1057,7 @@ static int add_port(struct ib_device *device, int port_num, goto err_put; } - if (device->process_mad) { + if (device->ops.process_mad) { p->pma_table = get_counter_table(device, port_num); ret = sysfs_create_group(&p->kobj, p->pma_table); if (ret) @@ -1124,7 +1124,7 @@ static int add_port(struct ib_device *device, int port_num, * port, so holder should be device. Therefore skip per port conunter * initialization. */ - if (device->alloc_hw_stats && port_num) + if (device->ops.alloc_hw_stats && port_num) setup_hw_stats(device, p, port_num); list_add_tail(&p->kobj.entry, &device->port_list); @@ -1245,7 +1245,7 @@ static ssize_t node_desc_store(struct device *device, struct ib_device_modify desc = {}; int ret; - if (!dev->modify_device) + if (!dev->ops.modify_device) return -EIO; memcpy(desc.node_desc, buf, min_t(int, count, IB_DEVICE_NODE_DESC_MAX)); @@ -1341,7 +1341,7 @@ int ib_device_register_sysfs(struct ib_device *device, } } - if (device->alloc_hw_stats) + if (device->ops.alloc_hw_stats) setup_hw_stats(device, NULL, 0); return 0; diff --git a/drivers/infiniband/core/ucm.c b/drivers/infiniband/core/ucm.c index 73332b9a25b5..7541fbaf58a3 100644 --- a/drivers/infiniband/core/ucm.c +++ b/drivers/infiniband/core/ucm.c @@ -1242,7 +1242,7 @@ static void ib_ucm_add_one(struct ib_device *device) dev_t base; struct ib_ucm_device *ucm_dev; - if (!device->alloc_ucontext || !rdma_cap_ib_cm(device, 1)) + if (!device->ops.alloc_ucontext || !rdma_cap_ib_cm(device, 1)) return; ucm_dev = kzalloc(sizeof *ucm_dev, GFP_KERNEL); diff --git a/drivers/infiniband/core/uverbs_cmd.c b/drivers/infiniband/core/uverbs_cmd.c index b70749542471..9af6f9322961 100644 --- a/drivers/infiniband/core/uverbs_cmd.c +++ b/drivers/infiniband/core/uverbs_cmd.c @@ -218,7 +218,7 @@ static int ib_uverbs_get_context(struct uverbs_attr_bundle *attrs) if (ret) goto err; - ucontext = ib_dev->alloc_ucontext(ib_dev, &attrs->driver_udata); + ucontext = ib_dev->ops.alloc_ucontext(ib_dev, &attrs->driver_udata); if (IS_ERR(ucontext)) { ret = PTR_ERR(ucontext); goto err_alloc; @@ -280,7 +280,7 @@ static int ib_uverbs_get_context(struct uverbs_attr_bundle *attrs) put_unused_fd(resp.async_fd); err_free: - ib_dev->dealloc_ucontext(ucontext); + ib_dev->ops.dealloc_ucontext(ucontext); err_alloc: ib_rdmacg_uncharge(&cg_obj, ib_dev, RDMACG_RESOURCE_HCA_HANDLE); @@ -455,7 +455,7 @@ static int ib_uverbs_alloc_pd(struct uverbs_attr_bundle *attrs) if (IS_ERR(uobj)) return PTR_ERR(uobj); - pd = ib_dev->alloc_pd(ib_dev, uobj->context, &attrs->driver_udata); + pd = ib_dev->ops.alloc_pd(ib_dev, uobj->context, &attrs->driver_udata); if (IS_ERR(pd)) { ret = PTR_ERR(pd); goto err; @@ -632,8 +632,8 @@ static int ib_uverbs_open_xrcd(struct uverbs_attr_bundle *attrs) } if (!xrcd) { - xrcd = ib_dev->alloc_xrcd(ib_dev, obj->uobject.context, - &attrs->driver_udata); + xrcd = ib_dev->ops.alloc_xrcd(ib_dev, obj->uobject.context, + &attrs->driver_udata); if (IS_ERR(xrcd)) { ret = PTR_ERR(xrcd); goto err; @@ -772,8 +772,8 @@ static int ib_uverbs_reg_mr(struct uverbs_attr_bundle *attrs) } } - mr = pd->device->reg_user_mr(pd, cmd.start, cmd.length, cmd.hca_va, - cmd.access_flags, &attrs->driver_udata); + mr = pd->device->ops.reg_user_mr(pd, cmd.start, cmd.length, cmd.hca_va, + cmd.access_flags, &attrs->driver_udata); if (IS_ERR(mr)) { ret = PTR_ERR(mr); goto err_put; @@ -862,9 +862,9 @@ static int ib_uverbs_rereg_mr(struct 
uverbs_attr_bundle *attrs) } old_pd = mr->pd; - ret = mr->device->rereg_user_mr(mr, cmd.flags, cmd.start, cmd.length, - cmd.hca_va, cmd.access_flags, pd, - &attrs->driver_udata); + ret = mr->device->ops.rereg_user_mr(mr, cmd.flags, cmd.start, cmd.length, + cmd.hca_va, cmd.access_flags, pd, + &attrs->driver_udata); if (!ret) { if (cmd.flags & IB_MR_REREG_PD) { atomic_inc(&pd->usecnt); @@ -927,7 +927,7 @@ static int ib_uverbs_alloc_mw(struct uverbs_attr_bundle *attrs) goto err_free; } - mw = pd->device->alloc_mw(pd, cmd.mw_type, &attrs->driver_udata); + mw = pd->device->ops.alloc_mw(pd, cmd.mw_type, &attrs->driver_udata); if (IS_ERR(mw)) { ret = PTR_ERR(mw); goto err_put; @@ -1041,8 +1041,8 @@ static struct ib_ucq_object *create_cq(struct uverbs_attr_bundle *attrs, attr.comp_vector = cmd->comp_vector; attr.flags = cmd->flags; - cq = ib_dev->create_cq(ib_dev, &attr, obj->uobject.context, - &attrs->driver_udata); + cq = ib_dev->ops.create_cq(ib_dev, &attr, obj->uobject.context, + &attrs->driver_udata); if (IS_ERR(cq)) { ret = PTR_ERR(cq); goto err_file; @@ -1142,7 +1142,7 @@ static int ib_uverbs_resize_cq(struct uverbs_attr_bundle *attrs) if (!cq) return -EINVAL; - ret = cq->device->resize_cq(cq, cmd.cqe, &attrs->driver_udata); + ret = cq->device->ops.resize_cq(cq, cmd.cqe, &attrs->driver_udata); if (ret) goto out; @@ -2186,7 +2186,7 @@ static int ib_uverbs_post_send(struct uverbs_attr_bundle *attrs) } resp.bad_wr = 0; - ret = qp->device->post_send(qp->real_qp, wr, &bad_wr); + ret = qp->device->ops.post_send(qp->real_qp, wr, &bad_wr); if (ret) for (next = wr; next; next = next->next) { ++resp.bad_wr; @@ -2339,7 +2339,7 @@ static int ib_uverbs_post_recv(struct uverbs_attr_bundle *attrs) } resp.bad_wr = 0; - ret = qp->device->post_recv(qp->real_qp, wr, &bad_wr); + ret = qp->device->ops.post_recv(qp->real_qp, wr, &bad_wr); uobj_put_obj_read(qp); if (ret) { @@ -2389,7 +2389,7 @@ static int ib_uverbs_post_srq_recv(struct uverbs_attr_bundle *attrs) } resp.bad_wr = 0; - ret = srq->device->post_srq_recv(srq, wr, &bad_wr); + ret = srq->device->ops.post_srq_recv(srq, wr, &bad_wr); uobj_put_obj_read(srq); @@ -2959,7 +2959,7 @@ static int ib_uverbs_ex_create_wq(struct uverbs_attr_bundle *attrs) obj->uevent.events_reported = 0; INIT_LIST_HEAD(&obj->uevent.event_list); - wq = pd->device->create_wq(pd, &wq_init_attr, &attrs->driver_udata); + wq = pd->device->ops.create_wq(pd, &wq_init_attr, &attrs->driver_udata); if (IS_ERR(wq)) { err = PTR_ERR(wq); goto err_put_cq; @@ -3059,8 +3059,8 @@ static int ib_uverbs_ex_modify_wq(struct uverbs_attr_bundle *attrs) wq_attr.flags = cmd.flags; wq_attr.flags_mask = cmd.flags_mask; } - ret = wq->device->modify_wq(wq, &wq_attr, cmd.attr_mask, - &attrs->driver_udata); + ret = wq->device->ops.modify_wq(wq, &wq_attr, cmd.attr_mask, + &attrs->driver_udata); uobj_put_obj_read(wq); return ret; } @@ -3133,8 +3133,8 @@ static int ib_uverbs_ex_create_rwq_ind_table(struct uverbs_attr_bundle *attrs) init_attr.log_ind_tbl_size = cmd.log_ind_tbl_size; init_attr.ind_tbl = wqs; - rwq_ind_tbl = ib_dev->create_rwq_ind_table(ib_dev, &init_attr, - &attrs->driver_udata); + rwq_ind_tbl = ib_dev->ops.create_rwq_ind_table(ib_dev, &init_attr, + &attrs->driver_udata); if (IS_ERR(rwq_ind_tbl)) { err = PTR_ERR(rwq_ind_tbl); @@ -3321,8 +3321,8 @@ static int ib_uverbs_ex_create_flow(struct uverbs_attr_bundle *attrs) goto err_free; } - flow_id = qp->device->create_flow(qp, flow_attr, IB_FLOW_DOMAIN_USER, - &attrs->driver_udata); + flow_id = qp->device->ops.create_flow(qp, flow_attr, IB_FLOW_DOMAIN_USER, 
+ &attrs->driver_udata); if (IS_ERR(flow_id)) { err = PTR_ERR(flow_id); @@ -3344,7 +3344,7 @@ static int ib_uverbs_ex_create_flow(struct uverbs_attr_bundle *attrs) kfree(kern_flow_attr); return uobj_alloc_commit(uobj); err_copy: - if (!qp->device->destroy_flow(flow_id)) + if (!qp->device->ops.destroy_flow(flow_id)) atomic_dec(&qp->usecnt); err_free: ib_uverbs_flow_resources_free(uflow_res); @@ -3439,7 +3439,7 @@ static int __uverbs_create_xsrq(struct uverbs_attr_bundle *attrs, obj->uevent.events_reported = 0; INIT_LIST_HEAD(&obj->uevent.event_list); - srq = pd->device->create_srq(pd, &attr, udata); + srq = pd->device->ops.create_srq(pd, &attr, udata); if (IS_ERR(srq)) { ret = PTR_ERR(srq); goto err_put; @@ -3561,8 +3561,8 @@ static int ib_uverbs_modify_srq(struct uverbs_attr_bundle *attrs) attr.max_wr = cmd.max_wr; attr.srq_limit = cmd.srq_limit; - ret = srq->device->modify_srq(srq, &attr, cmd.attr_mask, - &attrs->driver_udata); + ret = srq->device->ops.modify_srq(srq, &attr, cmd.attr_mask, + &attrs->driver_udata); uobj_put_obj_read(srq); @@ -3650,7 +3650,7 @@ static int ib_uverbs_ex_query_device(struct uverbs_attr_bundle *attrs) if (cmd.reserved) return -EINVAL; - err = ib_dev->query_device(ib_dev, &attr, &attrs->driver_udata); + err = ib_dev->ops.query_device(ib_dev, &attr, &attrs->driver_udata); if (err) return err; diff --git a/drivers/infiniband/core/uverbs_main.c b/drivers/infiniband/core/uverbs_main.c index 96a5f89bbb75..9f9172eb1512 100644 --- a/drivers/infiniband/core/uverbs_main.c +++ b/drivers/infiniband/core/uverbs_main.c @@ -106,7 +106,7 @@ int uverbs_dealloc_mw(struct ib_mw *mw) struct ib_pd *pd = mw->pd; int ret; - ret = mw->device->dealloc_mw(mw); + ret = mw->device->ops.dealloc_mw(mw); if (!ret) atomic_dec(&pd->usecnt); return ret; @@ -197,7 +197,7 @@ void ib_uverbs_release_file(struct kref *ref) srcu_key = srcu_read_lock(&file->device->disassociate_srcu); ib_dev = srcu_dereference(file->device->ib_dev, &file->device->disassociate_srcu); - if (ib_dev && !ib_dev->disassociate_ucontext) + if (ib_dev && !ib_dev->ops.disassociate_ucontext) module_put(ib_dev->owner); srcu_read_unlock(&file->device->disassociate_srcu, srcu_key); @@ -774,7 +774,7 @@ static int ib_uverbs_mmap(struct file *filp, struct vm_area_struct *vma) goto out; } - ret = ucontext->device->mmap(ucontext, vma); + ret = ucontext->device->ops.mmap(ucontext, vma); out: srcu_read_unlock(&file->device->disassociate_srcu, srcu_key); return ret; @@ -1036,7 +1036,7 @@ static int ib_uverbs_open(struct inode *inode, struct file *filp) /* In case IB device supports disassociate ucontext, there is no hard * dependency between uverbs device and its low level device. */ - module_dependent = !(ib_dev->disassociate_ucontext); + module_dependent = !(ib_dev->ops.disassociate_ucontext); if (module_dependent) { if (!try_module_get(ib_dev->owner)) { @@ -1203,7 +1203,7 @@ static void ib_uverbs_add_one(struct ib_device *device) struct ib_uverbs_device *uverbs_dev; int ret; - if (!device->alloc_ucontext) + if (!device->ops.alloc_ucontext) return; uverbs_dev = kzalloc(sizeof(*uverbs_dev), GFP_KERNEL); @@ -1249,7 +1249,7 @@ static void ib_uverbs_add_one(struct ib_device *device) dev_set_name(&uverbs_dev->dev, "uverbs%d", uverbs_dev->devnum); cdev_init(&uverbs_dev->cdev, - device->mmap ? &uverbs_mmap_fops : &uverbs_fops); + device->ops.mmap ? 
&uverbs_mmap_fops : &uverbs_fops); uverbs_dev->cdev.owner = THIS_MODULE; ret = cdev_device_add(&uverbs_dev->cdev, &uverbs_dev->dev); @@ -1337,7 +1337,7 @@ static void ib_uverbs_remove_one(struct ib_device *device, void *client_data) cdev_device_del(&uverbs_dev->cdev, &uverbs_dev->dev); ida_free(&uverbs_ida, uverbs_dev->devnum); - if (device->disassociate_ucontext) { + if (device->ops.disassociate_ucontext) { /* We disassociate HW resources and immediately return. * Userspace will see a EIO errno for all future access. * Upon returning, ib_device may be freed internally and is not diff --git a/drivers/infiniband/core/uverbs_std_types.c b/drivers/infiniband/core/uverbs_std_types.c index 063aff9e7a04..424f325f8cba 100644 --- a/drivers/infiniband/core/uverbs_std_types.c +++ b/drivers/infiniband/core/uverbs_std_types.c @@ -54,7 +54,7 @@ static int uverbs_free_flow(struct ib_uobject *uobject, struct ib_qp *qp = flow->qp; int ret; - ret = flow->device->destroy_flow(flow); + ret = flow->device->ops.destroy_flow(flow); if (!ret) { if (qp) atomic_dec(&qp->usecnt); diff --git a/drivers/infiniband/core/uverbs_std_types_counters.c b/drivers/infiniband/core/uverbs_std_types_counters.c index 8835bad5c6dd..309c5e80988d 100644 --- a/drivers/infiniband/core/uverbs_std_types_counters.c +++ b/drivers/infiniband/core/uverbs_std_types_counters.c @@ -44,7 +44,7 @@ static int uverbs_free_counters(struct ib_uobject *uobject, if (ret) return ret; - return counters->device->destroy_counters(counters); + return counters->device->ops.destroy_counters(counters); } static int UVERBS_HANDLER(UVERBS_METHOD_COUNTERS_CREATE)( @@ -61,10 +61,10 @@ static int UVERBS_HANDLER(UVERBS_METHOD_COUNTERS_CREATE)( * have the ability to remove methods from parse tree once * such condition is met. 
*/ - if (!ib_dev->create_counters) + if (!ib_dev->ops.create_counters) return -EOPNOTSUPP; - counters = ib_dev->create_counters(ib_dev, attrs); + counters = ib_dev->ops.create_counters(ib_dev, attrs); if (IS_ERR(counters)) { ret = PTR_ERR(counters); goto err_create_counters; @@ -90,7 +90,7 @@ static int UVERBS_HANDLER(UVERBS_METHOD_COUNTERS_READ)( uverbs_attr_get_obj(attrs, UVERBS_ATTR_READ_COUNTERS_HANDLE); int ret; - if (!counters->device->read_counters) + if (!counters->device->ops.read_counters) return -EOPNOTSUPP; if (!atomic_read(&counters->usecnt)) @@ -109,7 +109,7 @@ static int UVERBS_HANDLER(UVERBS_METHOD_COUNTERS_READ)( if (IS_ERR(read_attr.counters_buff)) return PTR_ERR(read_attr.counters_buff); - ret = counters->device->read_counters(counters, &read_attr, attrs); + ret = counters->device->ops.read_counters(counters, &read_attr, attrs); if (ret) return ret; diff --git a/drivers/infiniband/core/uverbs_std_types_cq.c b/drivers/infiniband/core/uverbs_std_types_cq.c index 859518eab583..42df59635a3c 100644 --- a/drivers/infiniband/core/uverbs_std_types_cq.c +++ b/drivers/infiniband/core/uverbs_std_types_cq.c @@ -71,7 +71,7 @@ static int UVERBS_HANDLER(UVERBS_METHOD_CQ_CREATE)( struct ib_uverbs_completion_event_file *ev_file = NULL; struct ib_uobject *ev_file_uobj; - if (!ib_dev->create_cq || !ib_dev->destroy_cq) + if (!ib_dev->ops.create_cq || !ib_dev->ops.destroy_cq) return -EOPNOTSUPP; ret = uverbs_copy_from(&attr.comp_vector, attrs, @@ -110,8 +110,8 @@ static int UVERBS_HANDLER(UVERBS_METHOD_CQ_CREATE)( INIT_LIST_HEAD(&obj->comp_list); INIT_LIST_HEAD(&obj->async_list); - cq = ib_dev->create_cq(ib_dev, &attr, obj->uobject.context, - &attrs->driver_udata); + cq = ib_dev->ops.create_cq(ib_dev, &attr, obj->uobject.context, + &attrs->driver_udata); if (IS_ERR(cq)) { ret = PTR_ERR(cq); goto err_event_file; diff --git a/drivers/infiniband/core/uverbs_std_types_dm.c b/drivers/infiniband/core/uverbs_std_types_dm.c index 658261b8f08e..2ef70637bee1 100644 --- a/drivers/infiniband/core/uverbs_std_types_dm.c +++ b/drivers/infiniband/core/uverbs_std_types_dm.c @@ -43,7 +43,7 @@ static int uverbs_free_dm(struct ib_uobject *uobject, if (ret) return ret; - return dm->device->dealloc_dm(dm); + return dm->device->ops.dealloc_dm(dm); } static int UVERBS_HANDLER(UVERBS_METHOD_DM_ALLOC)( @@ -57,7 +57,7 @@ static int UVERBS_HANDLER(UVERBS_METHOD_DM_ALLOC)( struct ib_dm *dm; int ret; - if (!ib_dev->alloc_dm) + if (!ib_dev->ops.alloc_dm) return -EOPNOTSUPP; ret = uverbs_copy_from(&attr.length, attrs, @@ -70,7 +70,7 @@ static int UVERBS_HANDLER(UVERBS_METHOD_DM_ALLOC)( if (ret) return ret; - dm = ib_dev->alloc_dm(ib_dev, uobj->context, &attr, attrs); + dm = ib_dev->ops.alloc_dm(ib_dev, uobj->context, &attr, attrs); if (IS_ERR(dm)) return PTR_ERR(dm); diff --git a/drivers/infiniband/core/uverbs_std_types_flow_action.c b/drivers/infiniband/core/uverbs_std_types_flow_action.c index e4d01fb5335d..4962b87fa600 100644 --- a/drivers/infiniband/core/uverbs_std_types_flow_action.c +++ b/drivers/infiniband/core/uverbs_std_types_flow_action.c @@ -43,7 +43,7 @@ static int uverbs_free_flow_action(struct ib_uobject *uobject, if (ret) return ret; - return action->device->destroy_flow_action(action); + return action->device->ops.destroy_flow_action(action); } static u64 esp_flags_uverbs_to_verbs(struct uverbs_attr_bundle *attrs, @@ -313,7 +313,7 @@ static int UVERBS_HANDLER(UVERBS_METHOD_FLOW_ACTION_ESP_CREATE)( struct ib_flow_action *action; struct ib_flow_action_esp_attr esp_attr = {}; - if 
(!ib_dev->create_flow_action_esp) + if (!ib_dev->ops.create_flow_action_esp) return -EOPNOTSUPP; ret = parse_flow_action_esp(ib_dev, attrs, &esp_attr, false); @@ -321,7 +321,8 @@ static int UVERBS_HANDLER(UVERBS_METHOD_FLOW_ACTION_ESP_CREATE)( return ret; /* No need to check as this attribute is marked as MANDATORY */ - action = ib_dev->create_flow_action_esp(ib_dev, &esp_attr.hdr, attrs); + action = ib_dev->ops.create_flow_action_esp(ib_dev, &esp_attr.hdr, + attrs); if (IS_ERR(action)) return PTR_ERR(action); @@ -340,7 +341,7 @@ static int UVERBS_HANDLER(UVERBS_METHOD_FLOW_ACTION_ESP_MODIFY)( int ret; struct ib_flow_action_esp_attr esp_attr = {}; - if (!action->device->modify_flow_action_esp) + if (!action->device->ops.modify_flow_action_esp) return -EOPNOTSUPP; ret = parse_flow_action_esp(action->device, attrs, &esp_attr, true); @@ -350,8 +351,9 @@ static int UVERBS_HANDLER(UVERBS_METHOD_FLOW_ACTION_ESP_MODIFY)( if (action->type != IB_FLOW_ACTION_ESP) return -EINVAL; - return action->device->modify_flow_action_esp(action, &esp_attr.hdr, - attrs); + return action->device->ops.modify_flow_action_esp(action, + &esp_attr.hdr, + attrs); } static const struct uverbs_attr_spec uverbs_flow_action_esp_keymat[] = { diff --git a/drivers/infiniband/core/uverbs_std_types_mr.c b/drivers/infiniband/core/uverbs_std_types_mr.c index 70ea48cfc047..cafb49a45515 100644 --- a/drivers/infiniband/core/uverbs_std_types_mr.c +++ b/drivers/infiniband/core/uverbs_std_types_mr.c @@ -54,7 +54,7 @@ static int UVERBS_HANDLER(UVERBS_METHOD_DM_MR_REG)( struct ib_mr *mr; int ret; - if (!ib_dev->reg_dm_mr) + if (!ib_dev->ops.reg_dm_mr) return -EOPNOTSUPP; ret = uverbs_copy_from(&attr.offset, attrs, UVERBS_ATTR_REG_DM_MR_OFFSET); @@ -83,7 +83,7 @@ static int UVERBS_HANDLER(UVERBS_METHOD_DM_MR_REG)( attr.length > dm->length - attr.offset) return -EINVAL; - mr = pd->device->reg_dm_mr(pd, dm, &attr, attrs); + mr = pd->device->ops.reg_dm_mr(pd, dm, &attr, attrs); if (IS_ERR(mr)) return PTR_ERR(mr); diff --git a/drivers/infiniband/core/verbs.c b/drivers/infiniband/core/verbs.c index 178899e3ce73..9cb0f38fd55a 100644 --- a/drivers/infiniband/core/verbs.c +++ b/drivers/infiniband/core/verbs.c @@ -214,8 +214,8 @@ EXPORT_SYMBOL(rdma_node_get_transport); enum rdma_link_layer rdma_port_get_link_layer(struct ib_device *device, u8 port_num) { enum rdma_transport_type lt; - if (device->get_link_layer) - return device->get_link_layer(device, port_num); + if (device->ops.get_link_layer) + return device->ops.get_link_layer(device, port_num); lt = rdma_node_get_transport(device->node_type); if (lt == RDMA_TRANSPORT_IB) @@ -243,7 +243,7 @@ struct ib_pd *__ib_alloc_pd(struct ib_device *device, unsigned int flags, struct ib_pd *pd; int mr_access_flags = 0; - pd = device->alloc_pd(device, NULL, NULL); + pd = device->ops.alloc_pd(device, NULL, NULL); if (IS_ERR(pd)) return pd; @@ -270,7 +270,7 @@ struct ib_pd *__ib_alloc_pd(struct ib_device *device, unsigned int flags, if (mr_access_flags) { struct ib_mr *mr; - mr = pd->device->get_dma_mr(pd, mr_access_flags); + mr = pd->device->ops.get_dma_mr(pd, mr_access_flags); if (IS_ERR(mr)) { ib_dealloc_pd(pd); return ERR_CAST(mr); @@ -307,7 +307,7 @@ void ib_dealloc_pd(struct ib_pd *pd) int ret; if (pd->__internal_mr) { - ret = pd->device->dereg_mr(pd->__internal_mr); + ret = pd->device->ops.dereg_mr(pd->__internal_mr); WARN_ON(ret); pd->__internal_mr = NULL; } @@ -319,7 +319,7 @@ void ib_dealloc_pd(struct ib_pd *pd) rdma_restrack_del(&pd->res); /* Making delalloc_pd a void return is a WIP, no driver 
should return an error here. */ - ret = pd->device->dealloc_pd(pd); + ret = pd->device->ops.dealloc_pd(pd); WARN_ONCE(ret, "Infiniband HW driver failed dealloc_pd"); } EXPORT_SYMBOL(ib_dealloc_pd); @@ -479,10 +479,10 @@ static struct ib_ah *_rdma_create_ah(struct ib_pd *pd, { struct ib_ah *ah; - if (!pd->device->create_ah) + if (!pd->device->ops.create_ah) return ERR_PTR(-EOPNOTSUPP); - ah = pd->device->create_ah(pd, ah_attr, udata); + ah = pd->device->ops.create_ah(pd, ah_attr, udata); if (!IS_ERR(ah)) { ah->device = pd->device; @@ -888,8 +888,8 @@ int rdma_modify_ah(struct ib_ah *ah, struct rdma_ah_attr *ah_attr) if (ret) return ret; - ret = ah->device->modify_ah ? - ah->device->modify_ah(ah, ah_attr) : + ret = ah->device->ops.modify_ah ? + ah->device->ops.modify_ah(ah, ah_attr) : -EOPNOTSUPP; ah->sgid_attr = rdma_update_sgid_attr(ah_attr, ah->sgid_attr); @@ -902,8 +902,8 @@ int rdma_query_ah(struct ib_ah *ah, struct rdma_ah_attr *ah_attr) { ah_attr->grh.sgid_attr = NULL; - return ah->device->query_ah ? - ah->device->query_ah(ah, ah_attr) : + return ah->device->ops.query_ah ? + ah->device->ops.query_ah(ah, ah_attr) : -EOPNOTSUPP; } EXPORT_SYMBOL(rdma_query_ah); @@ -915,7 +915,7 @@ int rdma_destroy_ah(struct ib_ah *ah) int ret; pd = ah->pd; - ret = ah->device->destroy_ah(ah); + ret = ah->device->ops.destroy_ah(ah); if (!ret) { atomic_dec(&pd->usecnt); if (sgid_attr) @@ -933,10 +933,10 @@ struct ib_srq *ib_create_srq(struct ib_pd *pd, { struct ib_srq *srq; - if (!pd->device->create_srq) + if (!pd->device->ops.create_srq) return ERR_PTR(-EOPNOTSUPP); - srq = pd->device->create_srq(pd, srq_init_attr, NULL); + srq = pd->device->ops.create_srq(pd, srq_init_attr, NULL); if (!IS_ERR(srq)) { srq->device = pd->device; @@ -965,17 +965,17 @@ int ib_modify_srq(struct ib_srq *srq, struct ib_srq_attr *srq_attr, enum ib_srq_attr_mask srq_attr_mask) { - return srq->device->modify_srq ? - srq->device->modify_srq(srq, srq_attr, srq_attr_mask, NULL) : - -EOPNOTSUPP; + return srq->device->ops.modify_srq ? + srq->device->ops.modify_srq(srq, srq_attr, srq_attr_mask, + NULL) : -EOPNOTSUPP; } EXPORT_SYMBOL(ib_modify_srq); int ib_query_srq(struct ib_srq *srq, struct ib_srq_attr *srq_attr) { - return srq->device->query_srq ? - srq->device->query_srq(srq, srq_attr) : -EOPNOTSUPP; + return srq->device->ops.query_srq ? + srq->device->ops.query_srq(srq, srq_attr) : -EOPNOTSUPP; } EXPORT_SYMBOL(ib_query_srq); @@ -997,7 +997,7 @@ int ib_destroy_srq(struct ib_srq *srq) if (srq_type == IB_SRQT_XRC) xrcd = srq->ext.xrc.xrcd; - ret = srq->device->destroy_srq(srq); + ret = srq->device->ops.destroy_srq(srq); if (!ret) { atomic_dec(&pd->usecnt); if (srq_type == IB_SRQT_XRC) @@ -1106,7 +1106,7 @@ static struct ib_qp *ib_create_xrc_qp(struct ib_qp *qp, if (!IS_ERR(qp)) __ib_insert_xrcd_qp(qp_init_attr->xrcd, real_qp); else - real_qp->device->destroy_qp(real_qp); + real_qp->device->ops.destroy_qp(real_qp); return qp; } @@ -1692,10 +1692,10 @@ int ib_get_eth_speed(struct ib_device *dev, u8 port_num, u8 *speed, u8 *width) if (rdma_port_get_link_layer(dev, port_num) != IB_LINK_LAYER_ETHERNET) return -EINVAL; - if (!dev->get_netdev) + if (!dev->ops.get_netdev) return -EOPNOTSUPP; - netdev = dev->get_netdev(dev, port_num); + netdev = dev->ops.get_netdev(dev, port_num); if (!netdev) return -ENODEV; @@ -1753,9 +1753,9 @@ int ib_query_qp(struct ib_qp *qp, qp_attr->ah_attr.grh.sgid_attr = NULL; qp_attr->alt_ah_attr.grh.sgid_attr = NULL; - return qp->device->query_qp ? 
- qp->device->query_qp(qp->real_qp, qp_attr, qp_attr_mask, qp_init_attr) : - -EOPNOTSUPP; + return qp->device->ops.query_qp ? + qp->device->ops.query_qp(qp->real_qp, qp_attr, qp_attr_mask, + qp_init_attr) : -EOPNOTSUPP; } EXPORT_SYMBOL(ib_query_qp); @@ -1841,7 +1841,7 @@ int ib_destroy_qp(struct ib_qp *qp) rdma_rw_cleanup_mrs(qp); rdma_restrack_del(&qp->res); - ret = qp->device->destroy_qp(qp); + ret = qp->device->ops.destroy_qp(qp); if (!ret) { if (alt_path_sgid_attr) rdma_put_gid_attr(alt_path_sgid_attr); @@ -1879,7 +1879,7 @@ struct ib_cq *__ib_create_cq(struct ib_device *device, { struct ib_cq *cq; - cq = device->create_cq(device, cq_attr, NULL, NULL); + cq = device->ops.create_cq(device, cq_attr, NULL, NULL); if (!IS_ERR(cq)) { cq->device = device; @@ -1899,8 +1899,9 @@ EXPORT_SYMBOL(__ib_create_cq); int rdma_set_cq_moderation(struct ib_cq *cq, u16 cq_count, u16 cq_period) { - return cq->device->modify_cq ? - cq->device->modify_cq(cq, cq_count, cq_period) : -EOPNOTSUPP; + return cq->device->ops.modify_cq ? + cq->device->ops.modify_cq(cq, cq_count, + cq_period) : -EOPNOTSUPP; } EXPORT_SYMBOL(rdma_set_cq_moderation); @@ -1910,14 +1911,14 @@ int ib_destroy_cq(struct ib_cq *cq) return -EBUSY; rdma_restrack_del(&cq->res); - return cq->device->destroy_cq(cq); + return cq->device->ops.destroy_cq(cq); } EXPORT_SYMBOL(ib_destroy_cq); int ib_resize_cq(struct ib_cq *cq, int cqe) { - return cq->device->resize_cq ? - cq->device->resize_cq(cq, cqe, NULL) : -EOPNOTSUPP; + return cq->device->ops.resize_cq ? + cq->device->ops.resize_cq(cq, cqe, NULL) : -EOPNOTSUPP; } EXPORT_SYMBOL(ib_resize_cq); @@ -1930,7 +1931,7 @@ int ib_dereg_mr(struct ib_mr *mr) int ret; rdma_restrack_del(&mr->res); - ret = mr->device->dereg_mr(mr); + ret = mr->device->ops.dereg_mr(mr); if (!ret) { atomic_dec(&pd->usecnt); if (dm) @@ -1959,10 +1960,10 @@ struct ib_mr *ib_alloc_mr(struct ib_pd *pd, { struct ib_mr *mr; - if (!pd->device->alloc_mr) + if (!pd->device->ops.alloc_mr) return ERR_PTR(-EOPNOTSUPP); - mr = pd->device->alloc_mr(pd, mr_type, max_num_sg); + mr = pd->device->ops.alloc_mr(pd, mr_type, max_num_sg); if (!IS_ERR(mr)) { mr->device = pd->device; mr->pd = pd; @@ -1986,10 +1987,10 @@ struct ib_fmr *ib_alloc_fmr(struct ib_pd *pd, { struct ib_fmr *fmr; - if (!pd->device->alloc_fmr) + if (!pd->device->ops.alloc_fmr) return ERR_PTR(-EOPNOTSUPP); - fmr = pd->device->alloc_fmr(pd, mr_access_flags, fmr_attr); + fmr = pd->device->ops.alloc_fmr(pd, mr_access_flags, fmr_attr); if (!IS_ERR(fmr)) { fmr->device = pd->device; fmr->pd = pd; @@ -2008,7 +2009,7 @@ int ib_unmap_fmr(struct list_head *fmr_list) return 0; fmr = list_entry(fmr_list->next, struct ib_fmr, list); - return fmr->device->unmap_fmr(fmr_list); + return fmr->device->ops.unmap_fmr(fmr_list); } EXPORT_SYMBOL(ib_unmap_fmr); @@ -2018,7 +2019,7 @@ int ib_dealloc_fmr(struct ib_fmr *fmr) int ret; pd = fmr->pd; - ret = fmr->device->dealloc_fmr(fmr); + ret = fmr->device->ops.dealloc_fmr(fmr); if (!ret) atomic_dec(&pd->usecnt); @@ -2070,14 +2071,14 @@ int ib_attach_mcast(struct ib_qp *qp, union ib_gid *gid, u16 lid) { int ret; - if (!qp->device->attach_mcast) + if (!qp->device->ops.attach_mcast) return -EOPNOTSUPP; if (!rdma_is_multicast_addr((struct in6_addr *)gid->raw) || qp->qp_type != IB_QPT_UD || !is_valid_mcast_lid(qp, lid)) return -EINVAL; - ret = qp->device->attach_mcast(qp, gid, lid); + ret = qp->device->ops.attach_mcast(qp, gid, lid); if (!ret) atomic_inc(&qp->usecnt); return ret; @@ -2088,14 +2089,14 @@ int ib_detach_mcast(struct ib_qp *qp, union ib_gid *gid, u16 
lid) { int ret; - if (!qp->device->detach_mcast) + if (!qp->device->ops.detach_mcast) return -EOPNOTSUPP; if (!rdma_is_multicast_addr((struct in6_addr *)gid->raw) || qp->qp_type != IB_QPT_UD || !is_valid_mcast_lid(qp, lid)) return -EINVAL; - ret = qp->device->detach_mcast(qp, gid, lid); + ret = qp->device->ops.detach_mcast(qp, gid, lid); if (!ret) atomic_dec(&qp->usecnt); return ret; @@ -2106,10 +2107,10 @@ struct ib_xrcd *__ib_alloc_xrcd(struct ib_device *device, const char *caller) { struct ib_xrcd *xrcd; - if (!device->alloc_xrcd) + if (!device->ops.alloc_xrcd) return ERR_PTR(-EOPNOTSUPP); - xrcd = device->alloc_xrcd(device, NULL, NULL); + xrcd = device->ops.alloc_xrcd(device, NULL, NULL); if (!IS_ERR(xrcd)) { xrcd->device = device; xrcd->inode = NULL; @@ -2137,7 +2138,7 @@ int ib_dealloc_xrcd(struct ib_xrcd *xrcd) return ret; } - return xrcd->device->dealloc_xrcd(xrcd); + return xrcd->device->ops.dealloc_xrcd(xrcd); } EXPORT_SYMBOL(ib_dealloc_xrcd); @@ -2160,10 +2161,10 @@ struct ib_wq *ib_create_wq(struct ib_pd *pd, { struct ib_wq *wq; - if (!pd->device->create_wq) + if (!pd->device->ops.create_wq) return ERR_PTR(-EOPNOTSUPP); - wq = pd->device->create_wq(pd, wq_attr, NULL); + wq = pd->device->ops.create_wq(pd, wq_attr, NULL); if (!IS_ERR(wq)) { wq->event_handler = wq_attr->event_handler; wq->wq_context = wq_attr->wq_context; @@ -2193,7 +2194,7 @@ int ib_destroy_wq(struct ib_wq *wq) if (atomic_read(&wq->usecnt)) return -EBUSY; - err = wq->device->destroy_wq(wq); + err = wq->device->ops.destroy_wq(wq); if (!err) { atomic_dec(&pd->usecnt); atomic_dec(&cq->usecnt); @@ -2215,10 +2216,10 @@ int ib_modify_wq(struct ib_wq *wq, struct ib_wq_attr *wq_attr, { int err; - if (!wq->device->modify_wq) + if (!wq->device->ops.modify_wq) return -EOPNOTSUPP; - err = wq->device->modify_wq(wq, wq_attr, wq_attr_mask, NULL); + err = wq->device->ops.modify_wq(wq, wq_attr, wq_attr_mask, NULL); return err; } EXPORT_SYMBOL(ib_modify_wq); @@ -2240,12 +2241,12 @@ struct ib_rwq_ind_table *ib_create_rwq_ind_table(struct ib_device *device, int i; u32 table_size; - if (!device->create_rwq_ind_table) + if (!device->ops.create_rwq_ind_table) return ERR_PTR(-EOPNOTSUPP); table_size = (1 << init_attr->log_ind_tbl_size); - rwq_ind_table = device->create_rwq_ind_table(device, - init_attr, NULL); + rwq_ind_table = device->ops.create_rwq_ind_table(device, + init_attr, NULL); if (IS_ERR(rwq_ind_table)) return rwq_ind_table; @@ -2275,7 +2276,7 @@ int ib_destroy_rwq_ind_table(struct ib_rwq_ind_table *rwq_ind_table) if (atomic_read(&rwq_ind_table->usecnt)) return -EBUSY; - err = rwq_ind_table->device->destroy_rwq_ind_table(rwq_ind_table); + err = rwq_ind_table->device->ops.destroy_rwq_ind_table(rwq_ind_table); if (!err) { for (i = 0; i < table_size; i++) atomic_dec(&ind_tbl[i]->usecnt); @@ -2288,48 +2289,50 @@ EXPORT_SYMBOL(ib_destroy_rwq_ind_table); int ib_check_mr_status(struct ib_mr *mr, u32 check_mask, struct ib_mr_status *mr_status) { - return mr->device->check_mr_status ? 
- mr->device->check_mr_status(mr, check_mask, mr_status) : -EOPNOTSUPP; + if (!mr->device->ops.check_mr_status) + return -EOPNOTSUPP; + + return mr->device->ops.check_mr_status(mr, check_mask, mr_status); } EXPORT_SYMBOL(ib_check_mr_status); int ib_set_vf_link_state(struct ib_device *device, int vf, u8 port, int state) { - if (!device->set_vf_link_state) + if (!device->ops.set_vf_link_state) return -EOPNOTSUPP; - return device->set_vf_link_state(device, vf, port, state); + return device->ops.set_vf_link_state(device, vf, port, state); } EXPORT_SYMBOL(ib_set_vf_link_state); int ib_get_vf_config(struct ib_device *device, int vf, u8 port, struct ifla_vf_info *info) { - if (!device->get_vf_config) + if (!device->ops.get_vf_config) return -EOPNOTSUPP; - return device->get_vf_config(device, vf, port, info); + return device->ops.get_vf_config(device, vf, port, info); } EXPORT_SYMBOL(ib_get_vf_config); int ib_get_vf_stats(struct ib_device *device, int vf, u8 port, struct ifla_vf_stats *stats) { - if (!device->get_vf_stats) + if (!device->ops.get_vf_stats) return -EOPNOTSUPP; - return device->get_vf_stats(device, vf, port, stats); + return device->ops.get_vf_stats(device, vf, port, stats); } EXPORT_SYMBOL(ib_get_vf_stats); int ib_set_vf_guid(struct ib_device *device, int vf, u8 port, u64 guid, int type) { - if (!device->set_vf_guid) + if (!device->ops.set_vf_guid) return -EOPNOTSUPP; - return device->set_vf_guid(device, vf, port, guid, type); + return device->ops.set_vf_guid(device, vf, port, guid, type); } EXPORT_SYMBOL(ib_set_vf_guid); @@ -2361,12 +2364,12 @@ EXPORT_SYMBOL(ib_set_vf_guid); int ib_map_mr_sg(struct ib_mr *mr, struct scatterlist *sg, int sg_nents, unsigned int *sg_offset, unsigned int page_size) { - if (unlikely(!mr->device->map_mr_sg)) + if (unlikely(!mr->device->ops.map_mr_sg)) return -EOPNOTSUPP; mr->page_size = page_size; - return mr->device->map_mr_sg(mr, sg, sg_nents, sg_offset); + return mr->device->ops.map_mr_sg(mr, sg, sg_nents, sg_offset); } EXPORT_SYMBOL(ib_map_mr_sg); @@ -2565,8 +2568,8 @@ static void __ib_drain_rq(struct ib_qp *qp) */ void ib_drain_sq(struct ib_qp *qp) { - if (qp->device->drain_sq) - qp->device->drain_sq(qp); + if (qp->device->ops.drain_sq) + qp->device->ops.drain_sq(qp); else __ib_drain_sq(qp); } @@ -2593,8 +2596,8 @@ EXPORT_SYMBOL(ib_drain_sq); */ void ib_drain_rq(struct ib_qp *qp) { - if (qp->device->drain_rq) - qp->device->drain_rq(qp); + if (qp->device->ops.drain_rq) + qp->device->ops.drain_rq(qp); else __ib_drain_rq(qp); } @@ -2632,10 +2635,11 @@ struct net_device *rdma_alloc_netdev(struct ib_device *device, u8 port_num, struct net_device *netdev; int rc; - if (!device->rdma_netdev_get_params) + if (!device->ops.rdma_netdev_get_params) return ERR_PTR(-EOPNOTSUPP); - rc = device->rdma_netdev_get_params(device, port_num, type, &params); + rc = device->ops.rdma_netdev_get_params(device, port_num, type, + &params); if (rc) return ERR_PTR(rc); @@ -2657,10 +2661,11 @@ int rdma_init_netdev(struct ib_device *device, u8 port_num, struct rdma_netdev_alloc_params params; int rc; - if (!device->rdma_netdev_get_params) + if (!device->ops.rdma_netdev_get_params) return -EOPNOTSUPP; - rc = device->rdma_netdev_get_params(device, port_num, type, &params); + rc = device->ops.rdma_netdev_get_params(device, port_num, type, + &params); if (rc) return rc; diff --git a/drivers/infiniband/hw/i40iw/i40iw_cm.c b/drivers/infiniband/hw/i40iw/i40iw_cm.c index 771eb6bd0785..ef137c40205c 100644 --- a/drivers/infiniband/hw/i40iw/i40iw_cm.c +++ b/drivers/infiniband/hw/i40iw/i40iw_cm.c @@ -3478,7 
+3478,7 @@ static void i40iw_qp_disconnect(struct i40iw_qp *iwqp) /* Need to free the Last Streaming Mode Message */ if (iwqp->ietf_mem.va) { if (iwqp->lsmm_mr) - iwibdev->ibdev.dereg_mr(iwqp->lsmm_mr); + iwibdev->ibdev.ops.dereg_mr(iwqp->lsmm_mr); i40iw_free_dma_mem(iwdev->sc_dev.hw, &iwqp->ietf_mem); } } diff --git a/drivers/infiniband/hw/mlx4/alias_GUID.c b/drivers/infiniband/hw/mlx4/alias_GUID.c index 155b4dfc0ae8..e44d817d7d87 100644 --- a/drivers/infiniband/hw/mlx4/alias_GUID.c +++ b/drivers/infiniband/hw/mlx4/alias_GUID.c @@ -849,7 +849,7 @@ int mlx4_ib_init_alias_guid_service(struct mlx4_ib_dev *dev) spin_lock_init(&dev->sriov.alias_guid.ag_work_lock); for (i = 1; i <= dev->num_ports; ++i) { - if (dev->ib_dev.query_gid(&dev->ib_dev , i, 0, &gid)) { + if (dev->ib_dev.ops.query_gid(&dev->ib_dev , i, 0, &gid)) { ret = -EFAULT; goto err_unregister; } diff --git a/drivers/infiniband/hw/mlx5/main.c b/drivers/infiniband/hw/mlx5/main.c index 27da1910123a..013d772fa2c4 100644 --- a/drivers/infiniband/hw/mlx5/main.c +++ b/drivers/infiniband/hw/mlx5/main.c @@ -150,7 +150,7 @@ static int get_port_state(struct ib_device *ibdev, int ret; memset(&attr, 0, sizeof(attr)); - ret = ibdev->query_port(ibdev, port_num, &attr); + ret = ibdev->ops.query_port(ibdev, port_num, &attr); if (!ret) *state = attr.state; return ret; diff --git a/drivers/infiniband/hw/nes/nes_cm.c b/drivers/infiniband/hw/nes/nes_cm.c index 2b67ace5b614..032883180f65 100644 --- a/drivers/infiniband/hw/nes/nes_cm.c +++ b/drivers/infiniband/hw/nes/nes_cm.c @@ -3033,7 +3033,7 @@ static int nes_disconnect(struct nes_qp *nesqp, int abrupt) /* Need to free the Last Streaming Mode Message */ if (nesqp->ietf_frame) { if (nesqp->lsmm_mr) - nesibdev->ibdev.dereg_mr(nesqp->lsmm_mr); + nesibdev->ibdev.ops.dereg_mr(nesqp->lsmm_mr); pci_free_consistent(nesdev->pcidev, nesqp->private_data_len + nesqp->ietf_frame_size, nesqp->ietf_frame, nesqp->ietf_frame_pbase); diff --git a/drivers/infiniband/sw/rdmavt/vt.c b/drivers/infiniband/sw/rdmavt/vt.c index c52b38fe2416..aef3aa3fe667 100644 --- a/drivers/infiniband/sw/rdmavt/vt.c +++ b/drivers/infiniband/sw/rdmavt/vt.c @@ -456,31 +456,31 @@ static noinline int check_support(struct rvt_dev_info *rdi, int verb) * rdmavt does not support modify device currently drivers must * provide. 
*/ - if (!rdi->ibdev.modify_device) + if (!rdi->ibdev.ops.modify_device) return -EOPNOTSUPP; break; case QUERY_PORT: - if (!rdi->ibdev.query_port) + if (!rdi->ibdev.ops.query_port) if (!rdi->driver_f.query_port_state) return -EINVAL; break; case MODIFY_PORT: - if (!rdi->ibdev.modify_port) + if (!rdi->ibdev.ops.modify_port) if (!rdi->driver_f.cap_mask_chg || !rdi->driver_f.shut_down_port) return -EINVAL; break; case QUERY_GID: - if (!rdi->ibdev.query_gid) + if (!rdi->ibdev.ops.query_gid) if (!rdi->driver_f.get_guid_be) return -EINVAL; break; case CREATE_QP: - if (!rdi->ibdev.create_qp) + if (!rdi->ibdev.ops.create_qp) if (!rdi->driver_f.qp_priv_alloc || !rdi->driver_f.qp_priv_free || !rdi->driver_f.notify_qp_reset || @@ -491,7 +491,7 @@ static noinline int check_support(struct rvt_dev_info *rdi, int verb) break; case MODIFY_QP: - if (!rdi->ibdev.modify_qp) + if (!rdi->ibdev.ops.modify_qp) if (!rdi->driver_f.notify_qp_reset || !rdi->driver_f.schedule_send || !rdi->driver_f.get_pmtu_from_attr || @@ -505,7 +505,7 @@ static noinline int check_support(struct rvt_dev_info *rdi, int verb) break; case DESTROY_QP: - if (!rdi->ibdev.destroy_qp) + if (!rdi->ibdev.ops.destroy_qp) if (!rdi->driver_f.qp_priv_free || !rdi->driver_f.notify_qp_reset || !rdi->driver_f.flush_qp_waiters || @@ -515,7 +515,7 @@ static noinline int check_support(struct rvt_dev_info *rdi, int verb) break; case POST_SEND: - if (!rdi->ibdev.post_send) + if (!rdi->ibdev.ops.post_send) if (!rdi->driver_f.schedule_send || !rdi->driver_f.do_send || !rdi->post_parms) diff --git a/drivers/infiniband/ulp/ipoib/ipoib_main.c b/drivers/infiniband/ulp/ipoib/ipoib_main.c index 8710214594d8..5224c42f9d08 100644 --- a/drivers/infiniband/ulp/ipoib/ipoib_main.c +++ b/drivers/infiniband/ulp/ipoib/ipoib_main.c @@ -2453,8 +2453,8 @@ static struct net_device *ipoib_add_port(const char *format, return ERR_PTR(result); } - if (hca->rdma_netdev_get_params) { - int rc = hca->rdma_netdev_get_params(hca, port, + if (hca->ops.rdma_netdev_get_params) { + int rc = hca->ops.rdma_netdev_get_params(hca, port, RDMA_NETDEV_IPOIB, &params); diff --git a/drivers/infiniband/ulp/iser/iser_memory.c b/drivers/infiniband/ulp/iser/iser_memory.c index dbe97c02848c..e9b7efc302d0 100644 --- a/drivers/infiniband/ulp/iser/iser_memory.c +++ b/drivers/infiniband/ulp/iser/iser_memory.c @@ -77,8 +77,8 @@ int iser_assign_reg_ops(struct iser_device *device) struct ib_device *ib_dev = device->ib_device; /* Assign function handles - based on FMR support */ - if (ib_dev->alloc_fmr && ib_dev->dealloc_fmr && - ib_dev->map_phys_fmr && ib_dev->unmap_fmr) { + if (ib_dev->ops.alloc_fmr && ib_dev->ops.dealloc_fmr && + ib_dev->ops.map_phys_fmr && ib_dev->ops.unmap_fmr) { iser_info("FMR supported, using FMR for registration\n"); device->reg_ops = &fmr_ops; } else if (ib_dev->attrs.device_cap_flags & IB_DEVICE_MEM_MGT_EXTENSIONS) { diff --git a/drivers/infiniband/ulp/opa_vnic/opa_vnic_netdev.c b/drivers/infiniband/ulp/opa_vnic/opa_vnic_netdev.c index 61558788b3fa..ae70cd18903e 100644 --- a/drivers/infiniband/ulp/opa_vnic/opa_vnic_netdev.c +++ b/drivers/infiniband/ulp/opa_vnic/opa_vnic_netdev.c @@ -330,10 +330,10 @@ struct opa_vnic_adapter *opa_vnic_add_netdev(struct ib_device *ibdev, struct rdma_netdev *rn; int rc; - netdev = ibdev->alloc_rdma_netdev(ibdev, port_num, - RDMA_NETDEV_OPA_VNIC, - "veth%d", NET_NAME_UNKNOWN, - ether_setup); + netdev = ibdev->ops.alloc_rdma_netdev(ibdev, port_num, + RDMA_NETDEV_OPA_VNIC, + "veth%d", NET_NAME_UNKNOWN, + ether_setup); if (!netdev) return ERR_PTR(-ENOMEM); 
else if (IS_ERR(netdev)) diff --git a/drivers/infiniband/ulp/srp/ib_srp.c b/drivers/infiniband/ulp/srp/ib_srp.c index eed0eb3bb04c..e58146d020bc 100644 --- a/drivers/infiniband/ulp/srp/ib_srp.c +++ b/drivers/infiniband/ulp/srp/ib_srp.c @@ -4063,8 +4063,10 @@ static void srp_add_one(struct ib_device *device) srp_dev->max_pages_per_mr = min_t(u64, SRP_MAX_PAGES_PER_MR, max_pages_per_mr); - srp_dev->has_fmr = (device->alloc_fmr && device->dealloc_fmr && - device->map_phys_fmr && device->unmap_fmr); + srp_dev->has_fmr = (device->ops.alloc_fmr && + device->ops.dealloc_fmr && + device->ops.map_phys_fmr && + device->ops.unmap_fmr); srp_dev->has_fr = (attr->device_cap_flags & IB_DEVICE_MEM_MGT_EXTENSIONS); if (!never_register && !srp_dev->has_fmr && !srp_dev->has_fr) { diff --git a/fs/cifs/smbdirect.c b/fs/cifs/smbdirect.c index e94a8d1d08a3..a568dac7b3a1 100644 --- a/fs/cifs/smbdirect.c +++ b/fs/cifs/smbdirect.c @@ -1724,7 +1724,7 @@ static struct smbd_connection *_smbd_get_connection( info->responder_resources); /* Need to send IRD/ORD in private data for iWARP */ - info->id->device->get_port_immutable( + info->id->device->ops.get_port_immutable( info->id->device, info->id->port_num, &port_immutable); if (port_immutable.core_cap_flags & RDMA_CORE_PORT_IWARP) { ird_ord_hdr[0] = info->responder_resources; diff --git a/include/rdma/ib_verbs.h b/include/rdma/ib_verbs.h index 3a02c72e7620..0ea494c976f1 100644 --- a/include/rdma/ib_verbs.h +++ b/include/rdma/ib_verbs.h @@ -2503,7 +2503,7 @@ struct ib_device_ops { struct ib_device { /* Do not access @dma_device directly from ULP nor from HW drivers. */ struct device *dma_device; - + struct ib_device_ops ops; char name[IB_DEVICE_NAME_MAX]; struct list_head event_handler_list; @@ -2528,273 +2528,6 @@ struct ib_device { struct iw_cm_verbs *iwcm; - /** - * alloc_hw_stats - Allocate a struct rdma_hw_stats and fill in the - * driver initialized data. The struct is kfree()'ed by the sysfs - * core when the device is removed. A lifespan of -1 in the return - * struct tells the core to set a default lifespan. - */ - struct rdma_hw_stats *(*alloc_hw_stats)(struct ib_device *device, - u8 port_num); - /** - * get_hw_stats - Fill in the counter value(s) in the stats struct. - * @index - The index in the value array we wish to have updated, or - * num_counters if we want all stats updated - * Return codes - - * < 0 - Error, no counters updated - * index - Updated the single counter pointed to by index - * num_counters - Updated all counters (will reset the timestamp - * and prevent further calls for lifespan milliseconds) - * Drivers are allowed to update all counters in lieu of just the - * one given in index at their option - */ - int (*get_hw_stats)(struct ib_device *device, - struct rdma_hw_stats *stats, - u8 port, int index); - int (*query_device)(struct ib_device *device, - struct ib_device_attr *device_attr, - struct ib_udata *udata); - int (*query_port)(struct ib_device *device, - u8 port_num, - struct ib_port_attr *port_attr); - enum rdma_link_layer (*get_link_layer)(struct ib_device *device, - u8 port_num); - /* When calling get_netdev, the HW vendor's driver should return the - * net device of device @device at port @port_num or NULL if such - * a net device doesn't exist. The vendor driver should call dev_hold - * on this net device. The HW vendor's device driver must guarantee - * that this function returns NULL before the net device has finished - * NETDEV_UNREGISTER state. 
- */ - struct net_device *(*get_netdev)(struct ib_device *device, - u8 port_num); - /* query_gid should be return GID value for @device, when @port_num - * link layer is either IB or iWarp. It is no-op if @port_num port - * is RoCE link layer. - */ - int (*query_gid)(struct ib_device *device, - u8 port_num, int index, - union ib_gid *gid); - /* When calling add_gid, the HW vendor's driver should add the gid - * of device of port at gid index available at @attr. Meta-info of - * that gid (for example, the network device related to this gid) is - * available at @attr. @context allows the HW vendor driver to store - * extra information together with a GID entry. The HW vendor driver may - * allocate memory to contain this information and store it in @context - * when a new GID entry is written to. Params are consistent until the - * next call of add_gid or delete_gid. The function should return 0 on - * success or error otherwise. The function could be called - * concurrently for different ports. This function is only called when - * roce_gid_table is used. - */ - int (*add_gid)(const struct ib_gid_attr *attr, - void **context); - /* When calling del_gid, the HW vendor's driver should delete the - * gid of device @device at gid index gid_index of port port_num - * available in @attr. - * Upon the deletion of a GID entry, the HW vendor must free any - * allocated memory. The caller will clear @context afterwards. - * This function is only called when roce_gid_table is used. - */ - int (*del_gid)(const struct ib_gid_attr *attr, - void **context); - int (*query_pkey)(struct ib_device *device, - u8 port_num, u16 index, u16 *pkey); - int (*modify_device)(struct ib_device *device, - int device_modify_mask, - struct ib_device_modify *device_modify); - int (*modify_port)(struct ib_device *device, - u8 port_num, int port_modify_mask, - struct ib_port_modify *port_modify); - struct ib_ucontext * (*alloc_ucontext)(struct ib_device *device, - struct ib_udata *udata); - int (*dealloc_ucontext)(struct ib_ucontext *context); - int (*mmap)(struct ib_ucontext *context, - struct vm_area_struct *vma); - struct ib_pd * (*alloc_pd)(struct ib_device *device, - struct ib_ucontext *context, - struct ib_udata *udata); - int (*dealloc_pd)(struct ib_pd *pd); - struct ib_ah * (*create_ah)(struct ib_pd *pd, - struct rdma_ah_attr *ah_attr, - struct ib_udata *udata); - int (*modify_ah)(struct ib_ah *ah, - struct rdma_ah_attr *ah_attr); - int (*query_ah)(struct ib_ah *ah, - struct rdma_ah_attr *ah_attr); - int (*destroy_ah)(struct ib_ah *ah); - struct ib_srq * (*create_srq)(struct ib_pd *pd, - struct ib_srq_init_attr *srq_init_attr, - struct ib_udata *udata); - int (*modify_srq)(struct ib_srq *srq, - struct ib_srq_attr *srq_attr, - enum ib_srq_attr_mask srq_attr_mask, - struct ib_udata *udata); - int (*query_srq)(struct ib_srq *srq, - struct ib_srq_attr *srq_attr); - int (*destroy_srq)(struct ib_srq *srq); - int (*post_srq_recv)(struct ib_srq *srq, - const struct ib_recv_wr *recv_wr, - const struct ib_recv_wr **bad_recv_wr); - struct ib_qp * (*create_qp)(struct ib_pd *pd, - struct ib_qp_init_attr *qp_init_attr, - struct ib_udata *udata); - int (*modify_qp)(struct ib_qp *qp, - struct ib_qp_attr *qp_attr, - int qp_attr_mask, - struct ib_udata *udata); - int (*query_qp)(struct ib_qp *qp, - struct ib_qp_attr *qp_attr, - int qp_attr_mask, - struct ib_qp_init_attr *qp_init_attr); - int (*destroy_qp)(struct ib_qp *qp); - int (*post_send)(struct ib_qp *qp, - const struct ib_send_wr *send_wr, - const struct ib_send_wr 
**bad_send_wr); - int (*post_recv)(struct ib_qp *qp, - const struct ib_recv_wr *recv_wr, - const struct ib_recv_wr **bad_recv_wr); - struct ib_cq * (*create_cq)(struct ib_device *device, - const struct ib_cq_init_attr *attr, - struct ib_ucontext *context, - struct ib_udata *udata); - int (*modify_cq)(struct ib_cq *cq, u16 cq_count, - u16 cq_period); - int (*destroy_cq)(struct ib_cq *cq); - int (*resize_cq)(struct ib_cq *cq, int cqe, - struct ib_udata *udata); - int (*poll_cq)(struct ib_cq *cq, int num_entries, - struct ib_wc *wc); - int (*peek_cq)(struct ib_cq *cq, int wc_cnt); - int (*req_notify_cq)(struct ib_cq *cq, - enum ib_cq_notify_flags flags); - int (*req_ncomp_notif)(struct ib_cq *cq, - int wc_cnt); - struct ib_mr * (*get_dma_mr)(struct ib_pd *pd, - int mr_access_flags); - struct ib_mr * (*reg_user_mr)(struct ib_pd *pd, - u64 start, u64 length, - u64 virt_addr, - int mr_access_flags, - struct ib_udata *udata); - int (*rereg_user_mr)(struct ib_mr *mr, - int flags, - u64 start, u64 length, - u64 virt_addr, - int mr_access_flags, - struct ib_pd *pd, - struct ib_udata *udata); - int (*dereg_mr)(struct ib_mr *mr); - struct ib_mr * (*alloc_mr)(struct ib_pd *pd, - enum ib_mr_type mr_type, - u32 max_num_sg); - int (*map_mr_sg)(struct ib_mr *mr, - struct scatterlist *sg, - int sg_nents, - unsigned int *sg_offset); - struct ib_mw * (*alloc_mw)(struct ib_pd *pd, - enum ib_mw_type type, - struct ib_udata *udata); - int (*dealloc_mw)(struct ib_mw *mw); - struct ib_fmr * (*alloc_fmr)(struct ib_pd *pd, - int mr_access_flags, - struct ib_fmr_attr *fmr_attr); - int (*map_phys_fmr)(struct ib_fmr *fmr, - u64 *page_list, int list_len, - u64 iova); - int (*unmap_fmr)(struct list_head *fmr_list); - int (*dealloc_fmr)(struct ib_fmr *fmr); - int (*attach_mcast)(struct ib_qp *qp, - union ib_gid *gid, - u16 lid); - int (*detach_mcast)(struct ib_qp *qp, - union ib_gid *gid, - u16 lid); - int (*process_mad)(struct ib_device *device, - int process_mad_flags, - u8 port_num, - const struct ib_wc *in_wc, - const struct ib_grh *in_grh, - const struct ib_mad_hdr *in_mad, - size_t in_mad_size, - struct ib_mad_hdr *out_mad, - size_t *out_mad_size, - u16 *out_mad_pkey_index); - struct ib_xrcd * (*alloc_xrcd)(struct ib_device *device, - struct ib_ucontext *ucontext, - struct ib_udata *udata); - int (*dealloc_xrcd)(struct ib_xrcd *xrcd); - struct ib_flow * (*create_flow)(struct ib_qp *qp, - struct ib_flow_attr - *flow_attr, - int domain, - struct ib_udata *udata); - int (*destroy_flow)(struct ib_flow *flow_id); - int (*check_mr_status)(struct ib_mr *mr, u32 check_mask, - struct ib_mr_status *mr_status); - void (*disassociate_ucontext)(struct ib_ucontext *ibcontext); - void (*drain_rq)(struct ib_qp *qp); - void (*drain_sq)(struct ib_qp *qp); - int (*set_vf_link_state)(struct ib_device *device, int vf, u8 port, - int state); - int (*get_vf_config)(struct ib_device *device, int vf, u8 port, - struct ifla_vf_info *ivf); - int (*get_vf_stats)(struct ib_device *device, int vf, u8 port, - struct ifla_vf_stats *stats); - int (*set_vf_guid)(struct ib_device *device, int vf, u8 port, u64 guid, - int type); - struct ib_wq * (*create_wq)(struct ib_pd *pd, - struct ib_wq_init_attr *init_attr, - struct ib_udata *udata); - int (*destroy_wq)(struct ib_wq *wq); - int (*modify_wq)(struct ib_wq *wq, - struct ib_wq_attr *attr, - u32 wq_attr_mask, - struct ib_udata *udata); - struct ib_rwq_ind_table * (*create_rwq_ind_table)(struct ib_device *device, - struct ib_rwq_ind_table_init_attr *init_attr, - struct ib_udata *udata); - int 
(*destroy_rwq_ind_table)(struct ib_rwq_ind_table *wq_ind_table); - struct ib_flow_action * (*create_flow_action_esp)(struct ib_device *device, - const struct ib_flow_action_attrs_esp *attr, - struct uverbs_attr_bundle *attrs); - int (*destroy_flow_action)(struct ib_flow_action *action); - int (*modify_flow_action_esp)(struct ib_flow_action *action, - const struct ib_flow_action_attrs_esp *attr, - struct uverbs_attr_bundle *attrs); - struct ib_dm * (*alloc_dm)(struct ib_device *device, - struct ib_ucontext *context, - struct ib_dm_alloc_attr *attr, - struct uverbs_attr_bundle *attrs); - int (*dealloc_dm)(struct ib_dm *dm); - struct ib_mr * (*reg_dm_mr)(struct ib_pd *pd, struct ib_dm *dm, - struct ib_dm_mr_attr *attr, - struct uverbs_attr_bundle *attrs); - struct ib_counters * (*create_counters)(struct ib_device *device, - struct uverbs_attr_bundle *attrs); - int (*destroy_counters)(struct ib_counters *counters); - int (*read_counters)(struct ib_counters *counters, - struct ib_counters_read_attr *counters_read_attr, - struct uverbs_attr_bundle *attrs); - - /** - * rdma netdev operation - * - * Driver implementing alloc_rdma_netdev or rdma_netdev_get_params - * must return -EOPNOTSUPP if it doesn't support the specified type. - */ - struct net_device *(*alloc_rdma_netdev)( - struct ib_device *device, - u8 port_num, - enum rdma_netdev_t type, - const char *name, - unsigned char name_assign_type, - void (*setup)(struct net_device *)); - - int (*rdma_netdev_get_params)(struct ib_device *device, u8 port_num, - enum rdma_netdev_t type, - struct rdma_netdev_alloc_params *params); - struct module *owner; struct device dev; /* First group for device attributes, @@ -2836,17 +2569,6 @@ struct ib_device { */ struct rdma_restrack_root res; - /** - * The following mandatory functions are used only at device - * registration. Keep functions such as these at the end of this - * structure to avoid cache line misses when accessing struct ib_device - * in fast paths. - */ - int (*get_port_immutable)(struct ib_device *, u8, struct ib_port_immutable *); - void (*get_dev_fw_str)(struct ib_device *, char *str); - const struct cpumask *(*get_vector_affinity)(struct ib_device *ibdev, - int comp_vector); - const struct uapi_definition *driver_def; enum rdma_driver_id driver_id; /* @@ -3361,7 +3083,7 @@ static inline bool rdma_cap_roce_gid_table(const struct ib_device *device, u8 port_num) { return rdma_protocol_roce(device, port_num) && - device->add_gid && device->del_gid; + device->ops.add_gid && device->ops.del_gid; } /* @@ -3585,7 +3307,8 @@ static inline int ib_post_srq_recv(struct ib_srq *srq, { const struct ib_recv_wr *dummy; - return srq->device->post_srq_recv(srq, recv_wr, bad_recv_wr ? : &dummy); + return srq->device->ops.post_srq_recv(srq, recv_wr, + bad_recv_wr ? : &dummy); } /** @@ -3688,7 +3411,7 @@ static inline int ib_post_send(struct ib_qp *qp, { const struct ib_send_wr *dummy; - return qp->device->post_send(qp, send_wr, bad_send_wr ? : &dummy); + return qp->device->ops.post_send(qp, send_wr, bad_send_wr ? : &dummy); } /** @@ -3705,7 +3428,7 @@ static inline int ib_post_recv(struct ib_qp *qp, { const struct ib_recv_wr *dummy; - return qp->device->post_recv(qp, recv_wr, bad_recv_wr ? : &dummy); + return qp->device->ops.post_recv(qp, recv_wr, bad_recv_wr ? 
: &dummy); } struct ib_cq *__ib_alloc_cq(struct ib_device *dev, void *private, @@ -3778,7 +3501,7 @@ int ib_destroy_cq(struct ib_cq *cq); static inline int ib_poll_cq(struct ib_cq *cq, int num_entries, struct ib_wc *wc) { - return cq->device->poll_cq(cq, num_entries, wc); + return cq->device->ops.poll_cq(cq, num_entries, wc); } /** @@ -3811,7 +3534,7 @@ static inline int ib_poll_cq(struct ib_cq *cq, int num_entries, static inline int ib_req_notify_cq(struct ib_cq *cq, enum ib_cq_notify_flags flags) { - return cq->device->req_notify_cq(cq, flags); + return cq->device->ops.req_notify_cq(cq, flags); } /** @@ -3823,8 +3546,8 @@ static inline int ib_req_notify_cq(struct ib_cq *cq, */ static inline int ib_req_ncomp_notif(struct ib_cq *cq, int wc_cnt) { - return cq->device->req_ncomp_notif ? - cq->device->req_ncomp_notif(cq, wc_cnt) : + return cq->device->ops.req_ncomp_notif ? + cq->device->ops.req_ncomp_notif(cq, wc_cnt) : -ENOSYS; } @@ -4088,7 +3811,7 @@ static inline int ib_map_phys_fmr(struct ib_fmr *fmr, u64 *page_list, int list_len, u64 iova) { - return fmr->device->map_phys_fmr(fmr, page_list, list_len, iova); + return fmr->device->ops.map_phys_fmr(fmr, page_list, list_len, iova); } /** @@ -4441,10 +4164,10 @@ static inline const struct cpumask * ib_get_vector_affinity(struct ib_device *device, int comp_vector) { if (comp_vector < 0 || comp_vector >= device->num_comp_vectors || - !device->get_vector_affinity) + !device->ops.get_vector_affinity) return NULL; - return device->get_vector_affinity(device, comp_vector); + return device->ops.get_vector_affinity(device, comp_vector); } diff --git a/include/rdma/uverbs_ioctl.h b/include/rdma/uverbs_ioctl.h index 2f56844fb7da..220dd324f870 100644 --- a/include/rdma/uverbs_ioctl.h +++ b/include/rdma/uverbs_ioctl.h @@ -419,10 +419,10 @@ struct uapi_definition { .kind = UAPI_DEF_IS_SUPPORTED_DEV_FN, \ .scope = UAPI_SCOPE_OBJECT, \ .needs_fn_offset = \ - offsetof(struct ib_device, ibdev_fn) + \ + offsetof(struct ib_device_ops, ibdev_fn) + \ BUILD_BUG_ON_ZERO( \ - sizeof(((struct ib_device *)0)->ibdev_fn) != \ - sizeof(void *)), \ + sizeof(((struct ib_device_ops *)0)->ibdev_fn) != \ + sizeof(void *)), \ } /* @@ -434,10 +434,10 @@ struct uapi_definition { .kind = UAPI_DEF_IS_SUPPORTED_DEV_FN, \ .scope = UAPI_SCOPE_METHOD, \ .needs_fn_offset = \ - offsetof(struct ib_device, ibdev_fn) + \ + offsetof(struct ib_device_ops, ibdev_fn) + \ BUILD_BUG_ON_ZERO( \ - sizeof(((struct ib_device *)0)->ibdev_fn) != \ - sizeof(void *)), \ + sizeof(((struct ib_device_ops *)0)->ibdev_fn) != \ + sizeof(void *)), \ } /* Call a function to determine if the entire object is supported or not */ diff --git a/net/rds/ib.c b/net/rds/ib.c index eba75c1ba359..9d7b7586f240 100644 --- a/net/rds/ib.c +++ b/net/rds/ib.c @@ -148,8 +148,8 @@ static void rds_ib_add_one(struct ib_device *device) has_fr = (device->attrs.device_cap_flags & IB_DEVICE_MEM_MGT_EXTENSIONS); - has_fmr = (device->alloc_fmr && device->dealloc_fmr && - device->map_phys_fmr && device->unmap_fmr); + has_fmr = (device->ops.alloc_fmr && device->ops.dealloc_fmr && + device->ops.map_phys_fmr && device->ops.unmap_fmr); rds_ibdev->use_fastreg = (has_fr && !has_fmr); rds_ibdev->fmr_max_remaps = device->attrs.max_map_per_fmr?: 32; diff --git a/net/sunrpc/xprtrdma/fmr_ops.c b/net/sunrpc/xprtrdma/fmr_ops.c index 7f5632cd5a48..fd8fea59fe92 100644 --- a/net/sunrpc/xprtrdma/fmr_ops.c +++ b/net/sunrpc/xprtrdma/fmr_ops.c @@ -41,7 +41,7 @@ enum { bool fmr_is_supported(struct rpcrdma_ia *ia) { - if (!ia->ri_device->alloc_fmr) { + if 
(!ia->ri_device->ops.alloc_fmr) { pr_info("rpcrdma: 'fmr' mode is not supported by device %s\n", ia->ri_device->name); return false;
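[Editor's note -- not part of the patch. Every hunk above applies the same conversion: optional verbs move out of struct ib_device into the embedded struct ib_device_ops, and callers probe dev->ops.<verb> before dispatching, falling back to -EOPNOTSUPP or a software path when the driver left the callback unset. The stand-alone sketch below only illustrates that calling convention; example_device, example_device_ops and toy_query_port are invented stand-ins, not definitions from ib_verbs.h.]

    #include <errno.h>
    #include <stdio.h>

    struct example_device;

    /* Optional driver callbacks are grouped into one ops table. */
    struct example_device_ops {
            int (*query_port)(struct example_device *dev, unsigned int port);
    };

    struct example_device {
            struct example_device_ops ops;  /* embedded by value, like ib_device.ops */
    };

    /* Core-style wrapper: test the ops member, not a pointer on the device itself. */
    static int example_query_port(struct example_device *dev, unsigned int port)
    {
            if (!dev->ops.query_port)
                    return -EOPNOTSUPP;     /* driver did not provide this verb */
            return dev->ops.query_port(dev, port);
    }

    /* Toy "driver" callback used to populate the ops table. */
    static int toy_query_port(struct example_device *dev, unsigned int port)
    {
            (void)dev;
            (void)port;
            return 0;
    }

    int main(void)
    {
            struct example_device dev  = { .ops = { .query_port = toy_query_port } };
            struct example_device bare = { 0 };

            printf("with driver callback:    %d\n", example_query_port(&dev, 1));
            printf("without driver callback: %d\n", example_query_port(&bare, 1));
            return 0;
    }

Grouping the callbacks into a single ops structure is also what lets drivers assign them in one type-checked place later in the series, rather than through the loosely typed per-field assignments that the earlier rdmavt fix worked around.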