From patchwork Mon Nov 5 11:35:09 2018
X-Patchwork-Submitter: Kamal Heib
X-Patchwork-Id: 10667867
From: Kamal Heib
To: Doug Ledford, Jason Gunthorpe
Cc: linux-rdma@vger.kernel.org, kamalheib1@gmail.com
Subject: [PATCH rdma-next v3 01/20] RDMA/core: Introduce ib_device_ops
Date: Mon, 5 Nov 2018 13:35:09 +0200
Message-Id: <20181105113528.8317-2-kamalheib1@gmail.com>
X-Mailer: git-send-email 2.14.5
In-Reply-To: <20181105113528.8317-1-kamalheib1@gmail.com>
References: <20181105113528.8317-1-kamalheib1@gmail.com>
X-Mailing-List: linux-rdma@vger.kernel.org

This change introduces the ib_device_ops structure, which defines all the
InfiniBand device operations in one place, so the code is more readable
and clean; today the ops are mixed in with the ib_device data members.
Providers need to define the operations they support and assign them
using ib_set_device_ops(), which also makes the provider code more
readable and clean.

Signed-off-by: Kamal Heib
---
 drivers/infiniband/core/device.c |  97 +++++++++++++++
 include/rdma/ib_verbs.h          | 259 +++++++++++++++++++++++++++++++++++++++
 2 files changed, 356 insertions(+)

diff --git a/drivers/infiniband/core/device.c b/drivers/infiniband/core/device.c
index 87eb4f2cdd7d..2fefb9d694dc 100644
--- a/drivers/infiniband/core/device.c
+++ b/drivers/infiniband/core/device.c
@@ -1198,6 +1198,103 @@ struct net_device *ib_get_net_dev_by_params(struct ib_device *dev,
 }
 EXPORT_SYMBOL(ib_get_net_dev_by_params);
 
+void ib_set_device_ops(struct ib_device *dev, const struct ib_device_ops *ops)
+{
+#define SET_DEVICE_OP(ptr, name) \
+	do { \
+		if (ops->name) \
+			if (!((ptr)->name)) \
+				(ptr)->name = ops->name; \
+	} while (0)
+
+	SET_DEVICE_OP(dev, query_device);
+	SET_DEVICE_OP(dev, modify_device);
+	SET_DEVICE_OP(dev, get_dev_fw_str);
+	SET_DEVICE_OP(dev, get_vector_affinity);
+	SET_DEVICE_OP(dev, query_port);
+	SET_DEVICE_OP(dev, modify_port);
+	SET_DEVICE_OP(dev, get_port_immutable);
+	SET_DEVICE_OP(dev, get_link_layer);
+	SET_DEVICE_OP(dev, get_netdev);
+	SET_DEVICE_OP(dev, alloc_rdma_netdev);
+	SET_DEVICE_OP(dev, query_gid);
+	SET_DEVICE_OP(dev, add_gid);
+	SET_DEVICE_OP(dev, del_gid);
+	SET_DEVICE_OP(dev, query_pkey);
+	SET_DEVICE_OP(dev, alloc_ucontext);
+	SET_DEVICE_OP(dev, dealloc_ucontext);
+	SET_DEVICE_OP(dev, mmap);
+	SET_DEVICE_OP(dev, disassociate_ucontext);
+	SET_DEVICE_OP(dev, alloc_pd);
+	SET_DEVICE_OP(dev, dealloc_pd);
+	SET_DEVICE_OP(dev, create_ah);
+	SET_DEVICE_OP(dev, modify_ah);
+	SET_DEVICE_OP(dev, query_ah);
+	SET_DEVICE_OP(dev, destroy_ah);
+	SET_DEVICE_OP(dev, create_srq);
+	SET_DEVICE_OP(dev, modify_srq);
+	SET_DEVICE_OP(dev, query_srq);
+	SET_DEVICE_OP(dev, destroy_srq);
+	SET_DEVICE_OP(dev, post_srq_recv);
+	SET_DEVICE_OP(dev, create_qp);
+	SET_DEVICE_OP(dev, modify_qp);
+	SET_DEVICE_OP(dev, query_qp);
+	SET_DEVICE_OP(dev, destroy_qp);
+	SET_DEVICE_OP(dev, post_send);
+	SET_DEVICE_OP(dev, post_recv);
+	SET_DEVICE_OP(dev, drain_rq);
+	SET_DEVICE_OP(dev, drain_sq);
+	SET_DEVICE_OP(dev, create_cq);
+	SET_DEVICE_OP(dev, modify_cq);
+	SET_DEVICE_OP(dev, destroy_cq);
+	SET_DEVICE_OP(dev, resize_cq);
+	SET_DEVICE_OP(dev, poll_cq);
+	SET_DEVICE_OP(dev, peek_cq);
+	SET_DEVICE_OP(dev, req_notify_cq);
+	SET_DEVICE_OP(dev, req_ncomp_notif);
+	SET_DEVICE_OP(dev, get_dma_mr);
+	SET_DEVICE_OP(dev, reg_user_mr);
+	SET_DEVICE_OP(dev, rereg_user_mr);
+	SET_DEVICE_OP(dev, dereg_mr);
+	SET_DEVICE_OP(dev, alloc_mr);
+	SET_DEVICE_OP(dev, map_mr_sg);
+	SET_DEVICE_OP(dev, check_mr_status);
+	SET_DEVICE_OP(dev, alloc_mw);
+	SET_DEVICE_OP(dev, dealloc_mw);
+	SET_DEVICE_OP(dev, alloc_fmr);
+	SET_DEVICE_OP(dev, map_phys_fmr);
+	SET_DEVICE_OP(dev, unmap_fmr);
+	SET_DEVICE_OP(dev, dealloc_fmr);
+	SET_DEVICE_OP(dev, attach_mcast);
+	SET_DEVICE_OP(dev, detach_mcast);
+	SET_DEVICE_OP(dev, process_mad);
+	SET_DEVICE_OP(dev, alloc_xrcd);
+	SET_DEVICE_OP(dev, dealloc_xrcd);
+	SET_DEVICE_OP(dev, create_flow);
+	SET_DEVICE_OP(dev, destroy_flow);
+	SET_DEVICE_OP(dev, create_flow_action_esp);
+	SET_DEVICE_OP(dev, destroy_flow_action);
+	SET_DEVICE_OP(dev, modify_flow_action_esp);
+	SET_DEVICE_OP(dev, set_vf_link_state);
+	SET_DEVICE_OP(dev, get_vf_config);
+	SET_DEVICE_OP(dev, get_vf_stats);
+	SET_DEVICE_OP(dev, set_vf_guid);
+	SET_DEVICE_OP(dev, create_wq);
+	SET_DEVICE_OP(dev, destroy_wq);
+	SET_DEVICE_OP(dev, modify_wq);
+	SET_DEVICE_OP(dev, create_rwq_ind_table);
+	SET_DEVICE_OP(dev, destroy_rwq_ind_table);
+	SET_DEVICE_OP(dev, alloc_dm);
+	SET_DEVICE_OP(dev, dealloc_dm);
+	SET_DEVICE_OP(dev, reg_dm_mr);
+	SET_DEVICE_OP(dev, create_counters);
+	SET_DEVICE_OP(dev, destroy_counters);
+	SET_DEVICE_OP(dev, read_counters);
+	SET_DEVICE_OP(dev, alloc_hw_stats);
+	SET_DEVICE_OP(dev, get_hw_stats);
+}
+EXPORT_SYMBOL(ib_set_device_ops);
+
 static const struct rdma_nl_cbs ibnl_ls_cb_table[RDMA_NL_LS_NUM_OPS] = {
 	[RDMA_NL_LS_OP_RESOLVE] = {
 		.doit = ib_nl_handle_resolve_resp,
diff --git a/include/rdma/ib_verbs.h b/include/rdma/ib_verbs.h
index b17eea0373cb..29f0bd55a592 100644
--- a/include/rdma/ib_verbs.h
+++ b/include/rdma/ib_verbs.h
@@ -2246,6 +2246,263 @@ struct ib_counters_read_attr {
 
 struct uverbs_attr_bundle;
 
+/**
+ * struct ib_device_ops - InfiniBand device operations
+ * This structure defines all the InfiniBand device operations, providers will
+ * need to define the supported operations, otherwise they will be set to null.
+ */
+struct ib_device_ops {
+	/* Device operations */
+	int (*query_device)(struct ib_device *device,
+			    struct ib_device_attr *device_attr,
+			    struct ib_udata *udata);
+	int (*modify_device)(struct ib_device *device, int device_modify_mask,
+			     struct ib_device_modify *device_modify);
+	void (*get_dev_fw_str)(struct ib_device *, char *str);
+	const struct cpumask *(*get_vector_affinity)(struct ib_device *ibdev,
+						     int comp_vector);
+	/* Port operations */
+	int (*query_port)(struct ib_device *device, u8 port_num,
+			  struct ib_port_attr *port_attr);
+	int (*modify_port)(struct ib_device *device, u8 port_num,
+			   int port_modify_mask,
+			   struct ib_port_modify *port_modify);
+	/**
+	 * The following mandatory functions are used only at device
+	 * registration. Keep functions such as these at the end of this
+	 * structure to avoid cache line misses when accessing struct ib_device
+	 * in fast paths.
+	 */
+	int (*get_port_immutable)(struct ib_device *, u8,
+				  struct ib_port_immutable *);
+	enum rdma_link_layer (*get_link_layer)(struct ib_device *device,
+					       u8 port_num);
+	/**
+	 * When calling get_netdev, the HW vendor's driver should return the
+	 * net device of device @device at port @port_num or NULL if such
+	 * a net device doesn't exist.
The vendor driver should call dev_hold
+	 * on this net device. The HW vendor's device driver must guarantee
+	 * that this function returns NULL before the net device has finished
+	 * NETDEV_UNREGISTER state.
+	 */
+	struct net_device *(*get_netdev)(struct ib_device *device, u8 port_num);
+	/**
+	 * rdma netdev operation
+	 *
+	 * Driver implementing alloc_rdma_netdev must return -EOPNOTSUPP if it
+	 * doesn't support the specified rdma netdev type.
+	 */
+	struct net_device *(*alloc_rdma_netdev)(
+		struct ib_device *device, u8 port_num, enum rdma_netdev_t type,
+		const char *name, unsigned char name_assign_type,
+		void (*setup)(struct net_device *));
+	/* GID operations */
+	/**
+	 * query_gid should be return GID value for @device, when @port_num
+	 * link layer is either IB or iWarp. It is no-op if @port_num port
+	 * is RoCE link layer.
+	 */
+	int (*query_gid)(struct ib_device *device, u8 port_num, int index,
+			 union ib_gid *gid);
+	/**
+	 * When calling add_gid, the HW vendor's driver should add the gid
+	 * of device of port at gid index available at @attr. Meta-info of
+	 * that gid (for example, the network device related to this gid) is
+	 * available at @attr. @context allows the HW vendor driver to store
+	 * extra information together with a GID entry. The HW vendor driver may
+	 * allocate memory to contain this information and store it in @context
+	 * when a new GID entry is written to. Params are consistent until the
+	 * next call of add_gid or delete_gid. The function should return 0 on
+	 * success or error otherwise. The function could be called
+	 * concurrently for different ports. This function is only called when
+	 * roce_gid_table is used.
+	 */
+	int (*add_gid)(const struct ib_gid_attr *attr, void **context);
+	/**
+	 * When calling del_gid, the HW vendor's driver should delete the
+	 * gid of device @device at gid index gid_index of port port_num
+	 * available in @attr.
+	 * Upon the deletion of a GID entry, the HW vendor must free any
+	 * allocated memory. The caller will clear @context afterwards.
+	 * This function is only called when roce_gid_table is used.
+	 */
+	int (*del_gid)(const struct ib_gid_attr *attr, void **context);
+	/* PKey operations */
+	int (*query_pkey)(struct ib_device *device, u8 port_num, u16 index,
+			  u16 *pkey);
+	/* Ucontext operations */
+	struct ib_ucontext *(*alloc_ucontext)(struct ib_device *device,
+					      struct ib_udata *udata);
+	int (*dealloc_ucontext)(struct ib_ucontext *context);
+	int (*mmap)(struct ib_ucontext *context, struct vm_area_struct *vma);
+	void (*disassociate_ucontext)(struct ib_ucontext *ibcontext);
+	/* PD operations */
+	struct ib_pd *(*alloc_pd)(struct ib_device *device,
+				  struct ib_ucontext *context,
+				  struct ib_udata *udata);
+	int (*dealloc_pd)(struct ib_pd *pd);
+	/* AH operations */
+	struct ib_ah *(*create_ah)(struct ib_pd *pd,
+				   struct rdma_ah_attr *ah_attr,
+				   struct ib_udata *udata);
+	int (*modify_ah)(struct ib_ah *ah, struct rdma_ah_attr *ah_attr);
+	int (*query_ah)(struct ib_ah *ah, struct rdma_ah_attr *ah_attr);
+	int (*destroy_ah)(struct ib_ah *ah);
+	/* SRQ operations */
+	struct ib_srq *(*create_srq)(struct ib_pd *pd,
+				     struct ib_srq_init_attr *srq_init_attr,
+				     struct ib_udata *udata);
+	int (*modify_srq)(struct ib_srq *srq, struct ib_srq_attr *srq_attr,
+			  enum ib_srq_attr_mask srq_attr_mask,
+			  struct ib_udata *udata);
+	int (*query_srq)(struct ib_srq *srq, struct ib_srq_attr *srq_attr);
+	int (*destroy_srq)(struct ib_srq *srq);
+	int (*post_srq_recv)(struct ib_srq *srq,
+			     const struct ib_recv_wr *recv_wr,
+			     const struct ib_recv_wr **bad_recv_wr);
+	/* QP operations */
+	struct ib_qp *(*create_qp)(struct ib_pd *pd,
+				   struct ib_qp_init_attr *qp_init_attr,
+				   struct ib_udata *udata);
+	int (*modify_qp)(struct ib_qp *qp, struct ib_qp_attr *qp_attr,
+			 int qp_attr_mask, struct ib_udata *udata);
+	int (*query_qp)(struct ib_qp *qp, struct ib_qp_attr *qp_attr,
+			int qp_attr_mask, struct ib_qp_init_attr *qp_init_attr);
+	int (*destroy_qp)(struct ib_qp *qp);
+	int (*post_send)(struct ib_qp *qp, const struct ib_send_wr *send_wr,
+			 const struct ib_send_wr **bad_send_wr);
+	int (*post_recv)(struct ib_qp *qp, const struct ib_recv_wr *recv_wr,
+			 const struct ib_recv_wr **bad_recv_wr);
+	void (*drain_rq)(struct ib_qp *qp);
+	void (*drain_sq)(struct ib_qp *qp);
+	/* CQ operations */
+	struct ib_cq *(*create_cq)(struct ib_device *device,
+				   const struct ib_cq_init_attr *attr,
+				   struct ib_ucontext *context,
+				   struct ib_udata *udata);
+	int (*modify_cq)(struct ib_cq *cq, u16 cq_count, u16 cq_period);
+	int (*destroy_cq)(struct ib_cq *cq);
+	int (*resize_cq)(struct ib_cq *cq, int cqe, struct ib_udata *udata);
+	int (*poll_cq)(struct ib_cq *cq, int num_entries, struct ib_wc *wc);
+	int (*peek_cq)(struct ib_cq *cq, int wc_cnt);
+	int (*req_notify_cq)(struct ib_cq *cq, enum ib_cq_notify_flags flags);
+	int (*req_ncomp_notif)(struct ib_cq *cq, int wc_cnt);
+	/* MR operations */
+	struct ib_mr *(*get_dma_mr)(struct ib_pd *pd, int mr_access_flags);
+	struct ib_mr *(*reg_user_mr)(struct ib_pd *pd, u64 start, u64 length,
+				     u64 virt_addr, int mr_access_flags,
+				     struct ib_udata *udata);
+	int (*rereg_user_mr)(struct ib_mr *mr, int flags, u64 start, u64 length,
+			     u64 virt_addr, int mr_access_flags,
+			     struct ib_pd *pd, struct ib_udata *udata);
+	int (*dereg_mr)(struct ib_mr *mr);
+	struct ib_mr *(*alloc_mr)(struct ib_pd *pd, enum ib_mr_type mr_type,
+				  u32 max_num_sg);
+	int (*map_mr_sg)(struct ib_mr *mr, struct scatterlist *sg, int sg_nents,
+			 unsigned int *sg_offset);
+	int (*check_mr_status)(struct ib_mr *mr, u32 check_mask,
+			       struct ib_mr_status *mr_status);
+	/* MW operations */
+	struct ib_mw *(*alloc_mw)(struct ib_pd *pd, enum ib_mw_type type,
+				  struct ib_udata *udata);
+	int (*dealloc_mw)(struct ib_mw *mw);
+	/* FMR operations */
+	struct ib_fmr *(*alloc_fmr)(struct ib_pd *pd, int mr_access_flags,
+				    struct ib_fmr_attr *fmr_attr);
+	int (*map_phys_fmr)(struct ib_fmr *fmr, u64 *page_list, int list_len,
+			    u64 iova);
+	int (*unmap_fmr)(struct list_head *fmr_list);
+	int (*dealloc_fmr)(struct ib_fmr *fmr);
+	/* Multicast operations */
+	int (*attach_mcast)(struct ib_qp *qp, union ib_gid *gid, u16 lid);
+	int (*detach_mcast)(struct ib_qp *qp, union ib_gid *gid, u16 lid);
+	/* MAD operations */
+	int (*process_mad)(struct ib_device *device, int process_mad_flags,
+			   u8 port_num, const struct ib_wc *in_wc,
+			   const struct ib_grh *in_grh,
+			   const struct ib_mad_hdr *in_mad, size_t in_mad_size,
+			   struct ib_mad_hdr *out_mad, size_t *out_mad_size,
+			   u16 *out_mad_pkey_index);
+	/* XRCD operations */
+	struct ib_xrcd *(*alloc_xrcd)(struct ib_device *device,
+				      struct ib_ucontext *ucontext,
+				      struct ib_udata *udata);
+	int (*dealloc_xrcd)(struct ib_xrcd *xrcd);
+	/* Flow operations */
+	struct ib_flow *(*create_flow)(struct ib_qp *qp,
+				       struct ib_flow_attr *flow_attr,
+				       int domain, struct ib_udata *udata);
+	int (*destroy_flow)(struct ib_flow *flow_id);
+	struct ib_flow_action *(*create_flow_action_esp)(
+		struct ib_device *device,
+		const struct ib_flow_action_attrs_esp *attr,
+		struct uverbs_attr_bundle *attrs);
+	int (*destroy_flow_action)(struct ib_flow_action *action);
+	int (*modify_flow_action_esp)(
+		struct ib_flow_action *action,
+		const struct ib_flow_action_attrs_esp *attr,
+		struct uverbs_attr_bundle *attrs);
+	/* SRIOV operations */
+	int (*set_vf_link_state)(struct ib_device *device, int vf, u8 port,
+				 int state);
+	int (*get_vf_config)(struct ib_device *device, int vf, u8 port,
+			     struct ifla_vf_info *ivf);
+	int (*get_vf_stats)(struct ib_device *device, int vf, u8 port,
+			    struct ifla_vf_stats *stats);
+	int (*set_vf_guid)(struct ib_device *device, int vf, u8 port, u64 guid,
+			   int type);
+	/* WQ operations */
+	struct ib_wq *(*create_wq)(struct ib_pd *pd,
+				   struct ib_wq_init_attr *init_attr,
+				   struct ib_udata *udata);
+	int (*destroy_wq)(struct ib_wq *wq);
+	int (*modify_wq)(struct ib_wq *wq, struct ib_wq_attr *attr,
+			 u32 wq_attr_mask, struct ib_udata *udata);
+	struct ib_rwq_ind_table *(*create_rwq_ind_table)(
+		struct ib_device *device,
+		struct ib_rwq_ind_table_init_attr *init_attr,
+		struct ib_udata *udata);
+	int (*destroy_rwq_ind_table)(struct ib_rwq_ind_table *wq_ind_table);
+	/* DM operations */
+	struct ib_dm *(*alloc_dm)(struct ib_device *device,
+				  struct ib_ucontext *context,
+				  struct ib_dm_alloc_attr *attr,
+				  struct uverbs_attr_bundle *attrs);
+	int (*dealloc_dm)(struct ib_dm *dm);
+	struct ib_mr *(*reg_dm_mr)(struct ib_pd *pd, struct ib_dm *dm,
+				   struct ib_dm_mr_attr *attr,
+				   struct uverbs_attr_bundle *attrs);
+	/* Counters/stats operations */
+	struct ib_counters *(*create_counters)(
+		struct ib_device *device, struct uverbs_attr_bundle *attrs);
+	int (*destroy_counters)(struct ib_counters *counters);
+	int (*read_counters)(struct ib_counters *counters,
+			     struct ib_counters_read_attr *counters_read_attr,
+			     struct uverbs_attr_bundle *attrs);
+	/**
+	 * alloc_hw_stats - Allocate a struct rdma_hw_stats and fill in the
+	 * driver initialized data. The struct is kfree()'ed by the sysfs
+	 * core when the device is removed. A lifespan of -1 in the return
+	 * struct tells the core to set a default lifespan.
+	 */
+	struct rdma_hw_stats *(*alloc_hw_stats)(struct ib_device *device,
+						u8 port_num);
+	/**
+	 * get_hw_stats - Fill in the counter value(s) in the stats struct.
+	 * @index - The index in the value array we wish to have updated, or
+	 *   num_counters if we want all stats updated
+	 * Return codes -
+	 *   < 0 - Error, no counters updated
+	 *   index - Updated the single counter pointed to by index
+	 *   num_counters - Updated all counters (will reset the timestamp
+	 *     and prevent further calls for lifespan milliseconds)
+	 * Drivers are allowed to update all counters in lieu of just the
+	 * one given in index at their option
+	 */
+	int (*get_hw_stats)(struct ib_device *device,
+			    struct rdma_hw_stats *stats, u8 port, int index);
+};
+
 struct ib_device {
 	/* Do not access @dma_device directly from ULP nor from HW drivers.
	 */
	struct device		     *dma_device;
@@ -2639,6 +2896,8 @@ void ib_unregister_client(struct ib_client *client);
 void *ib_get_client_data(struct ib_device *device, struct ib_client *client);
 void  ib_set_client_data(struct ib_device *device, struct ib_client *client,
 			 void *data);
+void ib_set_device_ops(struct ib_device *device,
+		       const struct ib_device_ops *ops);
 
 #if IS_ENABLED(CONFIG_INFINIBAND_USER_ACCESS)
 int rdma_user_mmap_io(struct ib_ucontext *ucontext, struct vm_area_struct *vma,

From patchwork Mon Nov 5 11:35:10 2018
X-Patchwork-Submitter: Kamal Heib
X-Patchwork-Id: 10667869
From: Kamal Heib
To: Doug Ledford, Jason Gunthorpe
Cc: linux-rdma@vger.kernel.org, kamalheib1@gmail.com
Subject: [PATCH rdma-next v3 02/20] RDMA/bnxt_re: Initialize ib_device_ops struct
Date: Mon, 5 Nov 2018 13:35:10 +0200
Message-Id: <20181105113528.8317-3-kamalheib1@gmail.com>
X-Mailer: git-send-email 2.14.5
In-Reply-To: <20181105113528.8317-1-kamalheib1@gmail.com>
References: <20181105113528.8317-1-kamalheib1@gmail.com>
X-Mailing-List: linux-rdma@vger.kernel.org

Initialize ib_device_ops with the supported operations using
ib_set_device_ops().

Signed-off-by: Kamal Heib
---
 drivers/infiniband/hw/bnxt_re/main.c | 107 ++++++++++++++++++-----------------
 1 file changed, 56 insertions(+), 51 deletions(-)

diff --git a/drivers/infiniband/hw/bnxt_re/main.c b/drivers/infiniband/hw/bnxt_re/main.c
index cf2282654210..6e7b3eb75fce 100644
--- a/drivers/infiniband/hw/bnxt_re/main.c
+++ b/drivers/infiniband/hw/bnxt_re/main.c
@@ -568,6 +568,61 @@ static void bnxt_re_unregister_ib(struct bnxt_re_dev *rdev)
 	ib_unregister_device(&rdev->ibdev);
 }
 
+static const struct ib_device_ops bnxt_re_dev_ops = {
+	/* Device operations */
+	.query_device = bnxt_re_query_device,
+	.modify_device = bnxt_re_modify_device,
+	.get_dev_fw_str = bnxt_re_query_fw_str,
+	/* Port operations */
+	.query_port = bnxt_re_query_port,
+	.get_port_immutable = bnxt_re_get_port_immutable,
+	.query_pkey = bnxt_re_query_pkey,
+	.get_netdev = bnxt_re_get_netdev,
+	.get_link_layer = bnxt_re_get_link_layer,
+	/* GID operations */
+	.add_gid = bnxt_re_add_gid,
+	.del_gid = bnxt_re_del_gid,
+	/* PD operations */
+	.alloc_pd = bnxt_re_alloc_pd,
+	.dealloc_pd = bnxt_re_dealloc_pd,
+	/* AH operations */
+	.create_ah = bnxt_re_create_ah,
+	.modify_ah = bnxt_re_modify_ah,
+	.query_ah = bnxt_re_query_ah,
+	.destroy_ah = bnxt_re_destroy_ah,
+	/* SRQ operations */
+	.create_srq = bnxt_re_create_srq,
+	.modify_srq = bnxt_re_modify_srq,
+	.query_srq = bnxt_re_query_srq,
+	.destroy_srq = bnxt_re_destroy_srq,
+	.post_srq_recv = bnxt_re_post_srq_recv,
+	/* QP operations */
+	.create_qp = bnxt_re_create_qp,
+	.modify_qp =
bnxt_re_modify_qp,
+	.query_qp = bnxt_re_query_qp,
+	.destroy_qp = bnxt_re_destroy_qp,
+	.post_send = bnxt_re_post_send,
+	.post_recv = bnxt_re_post_recv,
+	/* CQ operations */
+	.create_cq = bnxt_re_create_cq,
+	.destroy_cq = bnxt_re_destroy_cq,
+	.poll_cq = bnxt_re_poll_cq,
+	.req_notify_cq = bnxt_re_req_notify_cq,
+	/* MR operations */
+	.get_dma_mr = bnxt_re_get_dma_mr,
+	.dereg_mr = bnxt_re_dereg_mr,
+	.alloc_mr = bnxt_re_alloc_mr,
+	.map_mr_sg = bnxt_re_map_mr_sg,
+	.reg_user_mr = bnxt_re_reg_user_mr,
+	/* Ucontext operations */
+	.alloc_ucontext = bnxt_re_alloc_ucontext,
+	.dealloc_ucontext = bnxt_re_dealloc_ucontext,
+	.mmap = bnxt_re_mmap,
+	/* Stats operations */
+	.get_hw_stats = bnxt_re_ib_get_hw_stats,
+	.alloc_hw_stats = bnxt_re_ib_alloc_hw_stats,
+};
+
 static int bnxt_re_register_ib(struct bnxt_re_dev *rdev)
 {
 	struct ib_device *ibdev = &rdev->ibdev;
@@ -614,60 +669,10 @@ static int bnxt_re_register_ib(struct bnxt_re_dev *rdev)
 			(1ull << IB_USER_VERBS_CMD_DESTROY_AH);
 	/* POLL_CQ and REQ_NOTIFY_CQ is directly handled in libbnxt_re */
 
-	/* Kernel verbs */
-	ibdev->query_device = bnxt_re_query_device;
-	ibdev->modify_device = bnxt_re_modify_device;
-
-	ibdev->query_port = bnxt_re_query_port;
-	ibdev->get_port_immutable = bnxt_re_get_port_immutable;
-	ibdev->get_dev_fw_str = bnxt_re_query_fw_str;
-	ibdev->query_pkey = bnxt_re_query_pkey;
-	ibdev->get_netdev = bnxt_re_get_netdev;
-	ibdev->add_gid = bnxt_re_add_gid;
-	ibdev->del_gid = bnxt_re_del_gid;
-	ibdev->get_link_layer = bnxt_re_get_link_layer;
-
-	ibdev->alloc_pd = bnxt_re_alloc_pd;
-	ibdev->dealloc_pd = bnxt_re_dealloc_pd;
-
-	ibdev->create_ah = bnxt_re_create_ah;
-	ibdev->modify_ah = bnxt_re_modify_ah;
-	ibdev->query_ah = bnxt_re_query_ah;
-	ibdev->destroy_ah = bnxt_re_destroy_ah;
-
-	ibdev->create_srq = bnxt_re_create_srq;
-	ibdev->modify_srq = bnxt_re_modify_srq;
-	ibdev->query_srq = bnxt_re_query_srq;
-	ibdev->destroy_srq = bnxt_re_destroy_srq;
-	ibdev->post_srq_recv = bnxt_re_post_srq_recv;
-
-	ibdev->create_qp = bnxt_re_create_qp;
-	ibdev->modify_qp = bnxt_re_modify_qp;
-	ibdev->query_qp = bnxt_re_query_qp;
-	ibdev->destroy_qp = bnxt_re_destroy_qp;
-
-	ibdev->post_send = bnxt_re_post_send;
-	ibdev->post_recv = bnxt_re_post_recv;
-
-	ibdev->create_cq = bnxt_re_create_cq;
-	ibdev->destroy_cq = bnxt_re_destroy_cq;
-	ibdev->poll_cq = bnxt_re_poll_cq;
-	ibdev->req_notify_cq = bnxt_re_req_notify_cq;
-
-	ibdev->get_dma_mr = bnxt_re_get_dma_mr;
-	ibdev->dereg_mr = bnxt_re_dereg_mr;
-	ibdev->alloc_mr = bnxt_re_alloc_mr;
-	ibdev->map_mr_sg = bnxt_re_map_mr_sg;
-
-	ibdev->reg_user_mr = bnxt_re_reg_user_mr;
-	ibdev->alloc_ucontext = bnxt_re_alloc_ucontext;
-	ibdev->dealloc_ucontext = bnxt_re_dealloc_ucontext;
-	ibdev->mmap = bnxt_re_mmap;
-	ibdev->get_hw_stats = bnxt_re_ib_get_hw_stats;
-	ibdev->alloc_hw_stats = bnxt_re_ib_alloc_hw_stats;
 
 	rdma_set_device_sysfs_group(ibdev, &bnxt_re_dev_attr_group);
 	ibdev->driver_id = RDMA_DRIVER_BNXT_RE;
+	ib_set_device_ops(ibdev, &bnxt_re_dev_ops);
 	return ib_register_device(ibdev, "bnxt_re%d", NULL);
 }

From patchwork Mon Nov 5 11:35:11 2018
X-Patchwork-Submitter: Kamal Heib
X-Patchwork-Id: 10667871
From: Kamal Heib
To: Doug Ledford, Jason Gunthorpe
Cc: linux-rdma@vger.kernel.org, kamalheib1@gmail.com
Subject: [PATCH rdma-next v3 03/20] RDMA/cxgb3: Initialize ib_device_ops struct
Date: Mon, 5 Nov 2018 13:35:11 +0200
Message-Id: <20181105113528.8317-4-kamalheib1@gmail.com>
X-Mailer: git-send-email 2.14.5
In-Reply-To: <20181105113528.8317-1-kamalheib1@gmail.com>
References: <20181105113528.8317-1-kamalheib1@gmail.com>
X-Mailing-List: linux-rdma@vger.kernel.org

Initialize ib_device_ops with the supported operations using
ib_set_device_ops().
Signed-off-by: Kamal Heib
---
 drivers/infiniband/hw/cxgb3/iwch_provider.c | 75 +++++++++++++++++------------
 1 file changed, 45 insertions(+), 30 deletions(-)

diff --git a/drivers/infiniband/hw/cxgb3/iwch_provider.c b/drivers/infiniband/hw/cxgb3/iwch_provider.c
index ebbec02cebe0..5bc5837f1ef9 100644
--- a/drivers/infiniband/hw/cxgb3/iwch_provider.c
+++ b/drivers/infiniband/hw/cxgb3/iwch_provider.c
@@ -1317,6 +1317,50 @@ static void get_dev_fw_ver_str(struct ib_device *ibdev, char *str)
 	snprintf(str, IB_FW_VERSION_NAME_MAX, "%s", info.fw_version);
 }
 
+static const struct ib_device_ops iwch_dev_ops = {
+	/* Device operations */
+	.query_device = iwch_query_device,
+	.get_dev_fw_str = get_dev_fw_ver_str,
+	/* Port operations */
+	.get_port_immutable = iwch_port_immutable,
+	.query_port = iwch_query_port,
+	/* PKey operations */
+	.query_pkey = iwch_query_pkey,
+	/* GID operations */
+	.query_gid = iwch_query_gid,
+	/* Ucontext operations */
+	.alloc_ucontext = iwch_alloc_ucontext,
+	.dealloc_ucontext = iwch_dealloc_ucontext,
+	.mmap = iwch_mmap,
+	/* PD operations */
+	.alloc_pd = iwch_allocate_pd,
+	.dealloc_pd = iwch_deallocate_pd,
+	/* QP operations */
+	.create_qp = iwch_create_qp,
+	.modify_qp = iwch_ib_modify_qp,
+	.destroy_qp = iwch_destroy_qp,
+	.post_send = iwch_post_send,
+	.post_recv = iwch_post_receive,
+	/* CQ operations */
+	.create_cq = iwch_create_cq,
+	.destroy_cq = iwch_destroy_cq,
+	.resize_cq = iwch_resize_cq,
+	.poll_cq = iwch_poll_cq,
+	.req_notify_cq = iwch_arm_cq,
+	/* MR operations */
+	.get_dma_mr = iwch_get_dma_mr,
+	.reg_user_mr = iwch_reg_user_mr,
+	.dereg_mr = iwch_dereg_mr,
+	.map_mr_sg = iwch_map_mr_sg,
+	.alloc_mr = iwch_alloc_mr,
+	/* MW operations */
+	.alloc_mw = iwch_alloc_mw,
+	.dealloc_mw = iwch_dealloc_mw,
+	/* Stats operations */
+	.alloc_hw_stats = iwch_alloc_stats,
+	.get_hw_stats = iwch_get_mib,
+};
+
 int iwch_register_device(struct iwch_dev *dev)
 {
 	int ret;
@@ -1356,37 +1400,7 @@ int iwch_register_device(struct iwch_dev *dev)
 	dev->ibdev.phys_port_cnt = dev->rdev.port_info.nports;
 	dev->ibdev.num_comp_vectors = 1;
 	dev->ibdev.dev.parent = &dev->rdev.rnic_info.pdev->dev;
-	dev->ibdev.query_device = iwch_query_device;
-	dev->ibdev.query_port = iwch_query_port;
-	dev->ibdev.query_pkey = iwch_query_pkey;
-	dev->ibdev.query_gid = iwch_query_gid;
-	dev->ibdev.alloc_ucontext = iwch_alloc_ucontext;
-	dev->ibdev.dealloc_ucontext = iwch_dealloc_ucontext;
-	dev->ibdev.mmap = iwch_mmap;
-	dev->ibdev.alloc_pd = iwch_allocate_pd;
-	dev->ibdev.dealloc_pd = iwch_deallocate_pd;
-	dev->ibdev.create_qp = iwch_create_qp;
-	dev->ibdev.modify_qp = iwch_ib_modify_qp;
-	dev->ibdev.destroy_qp = iwch_destroy_qp;
-	dev->ibdev.create_cq = iwch_create_cq;
-	dev->ibdev.destroy_cq = iwch_destroy_cq;
-	dev->ibdev.resize_cq = iwch_resize_cq;
-	dev->ibdev.poll_cq = iwch_poll_cq;
-	dev->ibdev.get_dma_mr = iwch_get_dma_mr;
-	dev->ibdev.reg_user_mr = iwch_reg_user_mr;
-	dev->ibdev.dereg_mr = iwch_dereg_mr;
-	dev->ibdev.alloc_mw = iwch_alloc_mw;
-	dev->ibdev.dealloc_mw = iwch_dealloc_mw;
-	dev->ibdev.alloc_mr = iwch_alloc_mr;
-	dev->ibdev.map_mr_sg = iwch_map_mr_sg;
-	dev->ibdev.req_notify_cq = iwch_arm_cq;
-	dev->ibdev.post_send = iwch_post_send;
-	dev->ibdev.post_recv = iwch_post_receive;
-	dev->ibdev.alloc_hw_stats = iwch_alloc_stats;
-	dev->ibdev.get_hw_stats = iwch_get_mib;
 	dev->ibdev.uverbs_abi_ver = IWCH_UVERBS_ABI_VERSION;
-	dev->ibdev.get_port_immutable = iwch_port_immutable;
-	dev->ibdev.get_dev_fw_str = get_dev_fw_ver_str;
 
 	dev->ibdev.iwcm = kmalloc(sizeof(struct iw_cm_verbs), GFP_KERNEL);
 	if (!dev->ibdev.iwcm)
@@ -1405,6 +1419,7 @@ int iwch_register_device(struct iwch_dev *dev)
 	dev->ibdev.driver_id = RDMA_DRIVER_CXGB3;
 	rdma_set_device_sysfs_group(&dev->ibdev, &iwch_attr_group);
+	ib_set_device_ops(&dev->ibdev, &iwch_dev_ops);
 	ret = ib_register_device(&dev->ibdev, "cxgb3_%d", NULL);
 	if (ret)
 		kfree(dev->ibdev.iwcm);

From patchwork Mon Nov 5 11:35:12 2018
From: Kamal Heib
To: Doug Ledford, Jason Gunthorpe
Cc: linux-rdma@vger.kernel.org, kamalheib1@gmail.com
Subject: [PATCH rdma-next v3 04/20] RDMA/cxgb4: Initialize ib_device_ops struct
Date: Mon, 5 Nov 2018 13:35:12 +0200
Message-Id: <20181105113528.8317-5-kamalheib1@gmail.com>
In-Reply-To: <20181105113528.8317-1-kamalheib1@gmail.com>
References: <20181105113528.8317-1-kamalheib1@gmail.com>

Initialize ib_device_ops with the supported operations using
ib_set_device_ops().
Signed-off-by: Kamal Heib
---
 drivers/infiniband/hw/cxgb4/provider.c | 86 ++++++++++++++++++--------------
 1 file changed, 51 insertions(+), 35 deletions(-)

diff --git a/drivers/infiniband/hw/cxgb4/provider.c b/drivers/infiniband/hw/cxgb4/provider.c
index cbb3c0ddd990..604cc03e105a 100644
--- a/drivers/infiniband/hw/cxgb4/provider.c
+++ b/drivers/infiniband/hw/cxgb4/provider.c
@@ -531,6 +531,56 @@ static int fill_res_entry(struct sk_buff *msg, struct rdma_restrack_entry *res)
 		c4iw_restrack_funcs[res->type](msg, res) : 0;
 }
 
+static const struct ib_device_ops c4iw_dev_ops = {
+	/* Device operations */
+	.query_device = c4iw_query_device,
+	.get_dev_fw_str = get_dev_fw_str,
+	/* Port operations */
+	.query_port = c4iw_query_port,
+	.get_port_immutable = c4iw_port_immutable,
+	.get_netdev = get_netdev,
+	/* PKey operations */
+	.query_pkey = c4iw_query_pkey,
+	/* GID operations */
+	.query_gid = c4iw_query_gid,
+	/* Ucontext operations */
+	.alloc_ucontext = c4iw_alloc_ucontext,
+	.dealloc_ucontext = c4iw_dealloc_ucontext,
+	.mmap = c4iw_mmap,
+	/* PD operations */
+	.alloc_pd = c4iw_allocate_pd,
+	.dealloc_pd = c4iw_deallocate_pd,
+	/* QP operations */
+	.create_qp = c4iw_create_qp,
+	.modify_qp = c4iw_ib_modify_qp,
+	.query_qp = c4iw_ib_query_qp,
+	.destroy_qp = c4iw_destroy_qp,
+	.post_send = c4iw_post_send,
+	.post_recv = c4iw_post_receive,
+	/* SRQ operations */
+	.create_srq = c4iw_create_srq,
+	.modify_srq = c4iw_modify_srq,
+	.destroy_srq = c4iw_destroy_srq,
+	.post_srq_recv = c4iw_post_srq_recv,
+	/* CQ operations */
+	.create_cq = c4iw_create_cq,
+	.destroy_cq = c4iw_destroy_cq,
+	.poll_cq = c4iw_poll_cq,
+	.req_notify_cq = c4iw_arm_cq,
+	/* MR operations */
+	.get_dma_mr = c4iw_get_dma_mr,
+	.reg_user_mr = c4iw_reg_user_mr,
+	.dereg_mr = c4iw_dereg_mr,
+	.alloc_mr = c4iw_alloc_mr,
+	.map_mr_sg = c4iw_map_mr_sg,
+	/* MW operations */
+	.alloc_mw = c4iw_alloc_mw,
+	.dealloc_mw = c4iw_dealloc_mw,
+	/* Stats operations */
+	.alloc_hw_stats = c4iw_alloc_stats,
+	.get_hw_stats = c4iw_get_mib,
+};
+
 void c4iw_register_device(struct work_struct *work)
 {
 	int ret;
@@ -573,42 +623,7 @@ void c4iw_register_device(struct work_struct *work)
 	dev->ibdev.phys_port_cnt = dev->rdev.lldi.nports;
 	dev->ibdev.num_comp_vectors = dev->rdev.lldi.nciq;
 	dev->ibdev.dev.parent = &dev->rdev.lldi.pdev->dev;
-	dev->ibdev.query_device = c4iw_query_device;
-	dev->ibdev.query_port = c4iw_query_port;
-	dev->ibdev.query_pkey = c4iw_query_pkey;
-	dev->ibdev.query_gid = c4iw_query_gid;
-	dev->ibdev.alloc_ucontext = c4iw_alloc_ucontext;
-	dev->ibdev.dealloc_ucontext = c4iw_dealloc_ucontext;
-	dev->ibdev.mmap = c4iw_mmap;
-	dev->ibdev.alloc_pd = c4iw_allocate_pd;
-	dev->ibdev.dealloc_pd = c4iw_deallocate_pd;
-	dev->ibdev.create_qp = c4iw_create_qp;
-	dev->ibdev.modify_qp = c4iw_ib_modify_qp;
-	dev->ibdev.query_qp = c4iw_ib_query_qp;
-	dev->ibdev.destroy_qp = c4iw_destroy_qp;
-	dev->ibdev.create_srq = c4iw_create_srq;
-	dev->ibdev.modify_srq = c4iw_modify_srq;
-	dev->ibdev.destroy_srq = c4iw_destroy_srq;
-	dev->ibdev.create_cq = c4iw_create_cq;
-	dev->ibdev.destroy_cq = c4iw_destroy_cq;
-	dev->ibdev.poll_cq = c4iw_poll_cq;
-	dev->ibdev.get_dma_mr = c4iw_get_dma_mr;
-	dev->ibdev.reg_user_mr = c4iw_reg_user_mr;
-	dev->ibdev.dereg_mr = c4iw_dereg_mr;
-	dev->ibdev.alloc_mw = c4iw_alloc_mw;
-	dev->ibdev.dealloc_mw = c4iw_dealloc_mw;
-	dev->ibdev.alloc_mr = c4iw_alloc_mr;
-	dev->ibdev.map_mr_sg = c4iw_map_mr_sg;
-	dev->ibdev.req_notify_cq = c4iw_arm_cq;
-	dev->ibdev.post_send = c4iw_post_send;
-	dev->ibdev.post_recv = c4iw_post_receive;
-	dev->ibdev.post_srq_recv = c4iw_post_srq_recv;
-	dev->ibdev.alloc_hw_stats = c4iw_alloc_stats;
-	dev->ibdev.get_hw_stats = c4iw_get_mib;
 	dev->ibdev.uverbs_abi_ver = C4IW_UVERBS_ABI_VERSION;
-	dev->ibdev.get_port_immutable = c4iw_port_immutable;
-	dev->ibdev.get_dev_fw_str = get_dev_fw_str;
-	dev->ibdev.get_netdev = get_netdev;
 
 	dev->ibdev.iwcm = kmalloc(sizeof(struct iw_cm_verbs), GFP_KERNEL);
 	if (!dev->ibdev.iwcm) {
@@ -630,6 +645,7 @@ void c4iw_register_device(struct work_struct *work)
 	rdma_set_device_sysfs_group(&dev->ibdev, &c4iw_attr_group);
 	dev->ibdev.driver_id = RDMA_DRIVER_CXGB4;
+	ib_set_device_ops(&dev->ibdev, &c4iw_dev_ops);
 	ret = ib_register_device(&dev->ibdev, "cxgb4_%d", NULL);
 	if (ret)
 		goto err_kfree_iwcm;

From patchwork Mon Nov 5 11:35:13 2018
From: Kamal Heib
To: Doug Ledford, Jason Gunthorpe
Cc: linux-rdma@vger.kernel.org, kamalheib1@gmail.com
Subject: [PATCH rdma-next v3 05/20] RDMA/hfi1: Initialize ib_device_ops struct
Date: Mon, 5 Nov 2018 13:35:13 +0200
Message-Id: <20181105113528.8317-6-kamalheib1@gmail.com>
In-Reply-To: <20181105113528.8317-1-kamalheib1@gmail.com>
References: <20181105113528.8317-1-kamalheib1@gmail.com>

Initialize ib_device_ops with the supported operations using
ib_set_device_ops().

Signed-off-by: Kamal Heib
---
 drivers/infiniband/hw/hfi1/verbs.c | 23 +++++++++++++++--------
 1 file changed, 15 insertions(+), 8 deletions(-)

diff --git a/drivers/infiniband/hw/hfi1/verbs.c b/drivers/infiniband/hw/hfi1/verbs.c
index 48e11e510358..3a145406bb12 100644
--- a/drivers/infiniband/hw/hfi1/verbs.c
+++ b/drivers/infiniband/hw/hfi1/verbs.c
@@ -1616,6 +1616,19 @@ static int get_hw_stats(struct ib_device *ibdev, struct rdma_hw_stats *stats,
 	return count;
 }
 
+static const struct ib_device_ops hfi1_dev_ops = {
+	/* Device operations */
+	.modify_device = modify_device,
+	.get_dev_fw_str = hfi1_get_dev_fw_str,
+	/* Port operations */
+	.alloc_rdma_netdev = hfi1_vnic_alloc_rn,
+	/* MAD operations */
+	.process_mad = hfi1_process_mad,
+	/* Stats operations */
+	.alloc_hw_stats = alloc_hw_stats,
+	.get_hw_stats = get_hw_stats,
+};
+
 /**
  * hfi1_register_ib_device - register our device with the infiniband core
  * @dd: the device data structure
@@ -1659,14 +1672,8 @@ int hfi1_register_ib_device(struct hfi1_devdata *dd)
 	ibdev->owner = THIS_MODULE;
 	ibdev->phys_port_cnt = dd->num_pports;
 	ibdev->dev.parent = &dd->pcidev->dev;
-	ibdev->modify_device = modify_device;
-	ibdev->alloc_hw_stats = alloc_hw_stats;
-	ibdev->get_hw_stats = get_hw_stats;
-	ibdev->alloc_rdma_netdev = hfi1_vnic_alloc_rn;
-
-	/* keep process mad in the driver */
-	ibdev->process_mad = hfi1_process_mad;
-	ibdev->get_dev_fw_str = hfi1_get_dev_fw_str;
+
+	ib_set_device_ops(ibdev, &hfi1_dev_ops);
 
 	strlcpy(ibdev->node_desc, init_utsname()->nodename,
 		sizeof(ibdev->node_desc));

From patchwork Mon Nov 5 11:35:14 2018
From: Kamal Heib
To: Doug Ledford, Jason Gunthorpe
Cc: linux-rdma@vger.kernel.org, kamalheib1@gmail.com
Subject: [PATCH rdma-next v3 06/20] RDMA/hns: Initialize ib_device_ops struct
Date: Mon, 5 Nov 2018 13:35:14 +0200
Message-Id: <20181105113528.8317-7-kamalheib1@gmail.com>
In-Reply-To: <20181105113528.8317-1-kamalheib1@gmail.com>
References: <20181105113528.8317-1-kamalheib1@gmail.com>

Initialize ib_device_ops with the supported operations using
ib_set_device_ops().
Signed-off-by: Kamal Heib
---
 drivers/infiniband/hw/hns/hns_roce_device.h |   1 +
 drivers/infiniband/hw/hns/hns_roce_hw_v1.c  |  13 ++++
 drivers/infiniband/hw/hns/hns_roce_hw_v2.c  |  13 ++++
 drivers/infiniband/hw/hns/hns_roce_main.c   | 112 +++++++++++++++-------------
 4 files changed, 86 insertions(+), 53 deletions(-)

diff --git a/drivers/infiniband/hw/hns/hns_roce_device.h b/drivers/infiniband/hw/hns/hns_roce_device.h
index d39bdfdb5de9..01ca1550d04c 100644
--- a/drivers/infiniband/hw/hns/hns_roce_device.h
+++ b/drivers/infiniband/hw/hns/hns_roce_device.h
@@ -805,6 +805,7 @@ struct hns_roce_hw {
 	int (*modify_cq)(struct ib_cq *cq, u16 cq_count, u16 cq_period);
 	int (*init_eq)(struct hns_roce_dev *hr_dev);
 	void (*cleanup_eq)(struct hns_roce_dev *hr_dev);
+	const struct ib_device_ops *hns_roce_dev_ops;
 };
 
 struct hns_roce_dev {
diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v1.c b/drivers/infiniband/hw/hns/hns_roce_hw_v1.c
index ca05810c92dc..79056e4e9ebf 100644
--- a/drivers/infiniband/hw/hns/hns_roce_hw_v1.c
+++ b/drivers/infiniband/hw/hns/hns_roce_hw_v1.c
@@ -4793,6 +4793,18 @@ static void hns_roce_v1_cleanup_eq_table(struct hns_roce_dev *hr_dev)
 	kfree(eq_table->eq);
 }
 
+static const struct ib_device_ops hns_roce_v1_dev_ops = {
+	/* QP operations */
+	.query_qp = hns_roce_v1_query_qp,
+	.destroy_qp = hns_roce_v1_destroy_qp,
+	.post_send = hns_roce_v1_post_send,
+	.post_recv = hns_roce_v1_post_recv,
+	/* CQ operations */
+	.modify_cq = hns_roce_v1_modify_cq,
+	.req_notify_cq = hns_roce_v1_req_notify_cq,
+	.poll_cq = hns_roce_v1_poll_cq,
+};
+
 static const struct hns_roce_hw hns_roce_hw_v1 = {
 	.reset = hns_roce_v1_reset,
 	.hw_profile = hns_roce_v1_profile,
@@ -4818,6 +4830,7 @@ static const struct hns_roce_hw hns_roce_hw_v1 = {
 	.destroy_cq = hns_roce_v1_destroy_cq,
 	.init_eq = hns_roce_v1_init_eq_table,
 	.cleanup_eq = hns_roce_v1_cleanup_eq_table,
+	.hns_roce_dev_ops = &hns_roce_v1_dev_ops,
 };
 
 static const struct of_device_id hns_roce_of_match[] = {
diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
index a4c62ae23a9a..1802113750ac 100644
--- a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
+++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
@@ -5340,6 +5340,18 @@ static void hns_roce_v2_cleanup_eq_table(struct hns_roce_dev *hr_dev)
 	destroy_workqueue(hr_dev->irq_workq);
 }
 
+static const struct ib_device_ops hns_roce_v2_dev_ops = {
+	/* QP operations */
+	.query_qp = hns_roce_v2_query_qp,
+	.destroy_qp = hns_roce_v2_destroy_qp,
+	.post_send = hns_roce_v2_post_send,
+	.post_recv = hns_roce_v2_post_recv,
+	/* CQ operations */
+	.modify_cq = hns_roce_v2_modify_cq,
+	.req_notify_cq = hns_roce_v2_req_notify_cq,
+	.poll_cq = hns_roce_v2_poll_cq,
+};
+
 static const struct hns_roce_hw hns_roce_hw_v2 = {
 	.cmq_init = hns_roce_v2_cmq_init,
 	.cmq_exit = hns_roce_v2_cmq_exit,
@@ -5367,6 +5379,7 @@ static const struct hns_roce_hw hns_roce_hw_v2 = {
 	.poll_cq = hns_roce_v2_poll_cq,
 	.init_eq = hns_roce_v2_init_eq_table,
 	.cleanup_eq = hns_roce_v2_cleanup_eq_table,
+	.hns_roce_dev_ops = &hns_roce_v2_dev_ops,
 };
 
 static const struct pci_device_id hns_roce_hw_v2_pci_tbl[] = {
diff --git a/drivers/infiniband/hw/hns/hns_roce_main.c b/drivers/infiniband/hw/hns/hns_roce_main.c
index 1b3ee514f2ef..addbe443b130 100644
--- a/drivers/infiniband/hw/hns/hns_roce_main.c
+++ b/drivers/infiniband/hw/hns/hns_roce_main.c
@@ -440,6 +440,59 @@ static void hns_roce_unregister_device(struct hns_roce_dev *hr_dev)
 	ib_unregister_device(&hr_dev->ib_dev);
 }
 
+static const struct ib_device_ops hns_roce_dev_ops = {
+	/* Device operations */
+	.modify_device = hns_roce_modify_device,
+	.query_device = hns_roce_query_device,
+	/* Port operations */
+	.query_port = hns_roce_query_port,
+	.modify_port = hns_roce_modify_port,
+	.get_link_layer = hns_roce_get_link_layer,
+	.get_netdev = hns_roce_get_netdev,
+	.get_port_immutable = hns_roce_port_immutable,
+	/* GID operations */
+	.add_gid = hns_roce_add_gid,
+	.del_gid = hns_roce_del_gid,
+	/* PKey operations */
+	.query_pkey = hns_roce_query_pkey,
+	/* Ucontext operations */
+	.alloc_ucontext = hns_roce_alloc_ucontext,
+	.dealloc_ucontext = hns_roce_dealloc_ucontext,
+	.mmap = hns_roce_mmap,
+	.disassociate_ucontext = hns_roce_disassociate_ucontext,
+	/* PD operations */
+	.alloc_pd = hns_roce_alloc_pd,
+	.dealloc_pd = hns_roce_dealloc_pd,
+	/* AH operations */
+	.create_ah = hns_roce_create_ah,
+	.query_ah = hns_roce_query_ah,
+	.destroy_ah = hns_roce_destroy_ah,
+	/* QP operations */
+	.create_qp = hns_roce_create_qp,
+	.modify_qp = hns_roce_modify_qp,
+	/* CQ operations */
+	.create_cq = hns_roce_ib_create_cq,
+	.destroy_cq = hns_roce_ib_destroy_cq,
+	/* MR operations */
+	.get_dma_mr = hns_roce_get_dma_mr,
+	.reg_user_mr = hns_roce_reg_user_mr,
+	.dereg_mr = hns_roce_dereg_mr,
+};
+
+static const struct ib_device_ops hns_roce_dev_mr_ops = {
+	.rereg_user_mr = hns_roce_rereg_user_mr,
+};
+
+static const struct ib_device_ops hns_roce_dev_mw_ops = {
+	.alloc_mw = hns_roce_alloc_mw,
+	.dealloc_mw = hns_roce_dealloc_mw,
+};
+
+static const struct ib_device_ops hns_roce_dev_frmr_ops = {
+	.alloc_mr = hns_roce_alloc_mr,
+	.map_mr_sg = hns_roce_map_mr_sg,
+};
+
 static int hns_roce_register_device(struct hns_roce_dev *hr_dev)
 {
 	int ret;
@@ -479,73 +532,26 @@ static int hns_roce_register_device(struct hns_roce_dev *hr_dev)
 	ib_dev->uverbs_ex_cmd_mask |=
 		(1ULL << IB_USER_VERBS_EX_CMD_MODIFY_CQ);
 
-	/* HCA||device||port */
-	ib_dev->modify_device = hns_roce_modify_device;
-	ib_dev->query_device = hns_roce_query_device;
-	ib_dev->query_port = hns_roce_query_port;
-	ib_dev->modify_port = hns_roce_modify_port;
-	ib_dev->get_link_layer = hns_roce_get_link_layer;
-	ib_dev->get_netdev = hns_roce_get_netdev;
-	ib_dev->add_gid = hns_roce_add_gid;
-	ib_dev->del_gid = hns_roce_del_gid;
-	ib_dev->query_pkey = hns_roce_query_pkey;
-	ib_dev->alloc_ucontext = hns_roce_alloc_ucontext;
-	ib_dev->dealloc_ucontext = hns_roce_dealloc_ucontext;
-	ib_dev->mmap = hns_roce_mmap;
-
-	/* PD */
-	ib_dev->alloc_pd = hns_roce_alloc_pd;
-	ib_dev->dealloc_pd = hns_roce_dealloc_pd;
-
-	/* AH */
-	ib_dev->create_ah = hns_roce_create_ah;
-	ib_dev->query_ah = hns_roce_query_ah;
-	ib_dev->destroy_ah = hns_roce_destroy_ah;
-
-	/* QP */
-	ib_dev->create_qp = hns_roce_create_qp;
-	ib_dev->modify_qp = hns_roce_modify_qp;
-	ib_dev->query_qp = hr_dev->hw->query_qp;
-	ib_dev->destroy_qp = hr_dev->hw->destroy_qp;
-	ib_dev->post_send = hr_dev->hw->post_send;
-	ib_dev->post_recv = hr_dev->hw->post_recv;
-
-	/* CQ */
-	ib_dev->create_cq = hns_roce_ib_create_cq;
-	ib_dev->modify_cq = hr_dev->hw->modify_cq;
-	ib_dev->destroy_cq = hns_roce_ib_destroy_cq;
-	ib_dev->req_notify_cq = hr_dev->hw->req_notify_cq;
-	ib_dev->poll_cq = hr_dev->hw->poll_cq;
-
-	/* MR */
-	ib_dev->get_dma_mr = hns_roce_get_dma_mr;
-	ib_dev->reg_user_mr = hns_roce_reg_user_mr;
-	ib_dev->dereg_mr = hns_roce_dereg_mr;
 	if (hr_dev->caps.flags & HNS_ROCE_CAP_FLAG_REREG_MR) {
-		ib_dev->rereg_user_mr = hns_roce_rereg_user_mr;
 		ib_dev->uverbs_cmd_mask |= (1ULL << IB_USER_VERBS_CMD_REREG_MR);
+		ib_set_device_ops(ib_dev, &hns_roce_dev_mr_ops);
 	}
 
 	/* MW */
 	if (hr_dev->caps.flags & HNS_ROCE_CAP_FLAG_MW) {
-		ib_dev->alloc_mw = hns_roce_alloc_mw;
-		ib_dev->dealloc_mw = hns_roce_dealloc_mw;
 		ib_dev->uverbs_cmd_mask |=
 					(1ULL << IB_USER_VERBS_CMD_ALLOC_MW) |
 					(1ULL << IB_USER_VERBS_CMD_DEALLOC_MW);
+		ib_set_device_ops(ib_dev, &hns_roce_dev_mw_ops);
 	}
 
 	/* FRMR */
-	if (hr_dev->caps.flags & HNS_ROCE_CAP_FLAG_FRMR) {
-		ib_dev->alloc_mr = hns_roce_alloc_mr;
-		ib_dev->map_mr_sg = hns_roce_map_mr_sg;
-	}
-
-	/* OTHERS */
-	ib_dev->get_port_immutable = hns_roce_port_immutable;
-	ib_dev->disassociate_ucontext = hns_roce_disassociate_ucontext;
+	if (hr_dev->caps.flags & HNS_ROCE_CAP_FLAG_FRMR)
+		ib_set_device_ops(ib_dev, &hns_roce_dev_frmr_ops);
 
 	ib_dev->driver_id = RDMA_DRIVER_HNS;
+	ib_set_device_ops(ib_dev, hr_dev->hw->hns_roce_dev_ops);
+	ib_set_device_ops(ib_dev, &hns_roce_dev_ops);
 	ret = ib_register_device(ib_dev, "hns_%d", NULL);
 	if (ret) {
 		dev_err(dev, "ib_register_device failed!\n");

From patchwork Mon Nov 5 11:35:15 2018
From: Kamal Heib
To: Doug Ledford, Jason Gunthorpe
Cc: linux-rdma@vger.kernel.org, kamalheib1@gmail.com
Subject: [PATCH rdma-next v3 07/20] RDMA/i40iw: Initialize ib_device_ops struct
Date: Mon, 5 Nov 2018 13:35:15 +0200
Message-Id: <20181105113528.8317-8-kamalheib1@gmail.com>
In-Reply-To: <20181105113528.8317-1-kamalheib1@gmail.com>
References: <20181105113528.8317-1-kamalheib1@gmail.com>
Initialize ib_device_ops with the supported operations using
ib_set_device_ops().

Signed-off-by: Kamal Heib
---
 drivers/infiniband/hw/i40iw/i40iw_verbs.c | 76 ++++++++++++++++++-------------
 1 file changed, 45 insertions(+), 31 deletions(-)

diff --git a/drivers/infiniband/hw/i40iw/i40iw_verbs.c b/drivers/infiniband/hw/i40iw/i40iw_verbs.c
index 102875872bea..ee2061298e7c 100644
--- a/drivers/infiniband/hw/i40iw/i40iw_verbs.c
+++ b/drivers/infiniband/hw/i40iw/i40iw_verbs.c
@@ -2740,6 +2740,50 @@ static const struct cpumask *i40iw_get_vector_affinity(struct ib_device *ibdev,
 	return irq_get_affinity_mask(msix_vec->irq);
 }
 
+static const struct ib_device_ops i40iw_dev_ops = {
+	/* Device operations */
+	.query_device = i40iw_query_device,
+	.get_dev_fw_str = i40iw_get_dev_fw_str,
+	.get_vector_affinity = i40iw_get_vector_affinity,
+	/* Port operations */
+	.query_port = i40iw_query_port,
+	.get_port_immutable = i40iw_port_immutable,
+	/* PKey operations */
+	.query_pkey = i40iw_query_pkey,
+	/* GID operations */
+	.query_gid = i40iw_query_gid,
+	/* Ucontext operations */
+	.alloc_ucontext = i40iw_alloc_ucontext,
+	.dealloc_ucontext = i40iw_dealloc_ucontext,
+	.mmap = i40iw_mmap,
+	/* PD operations */
+	.alloc_pd = i40iw_alloc_pd,
+	.dealloc_pd = i40iw_dealloc_pd,
+	/* QP operations */
+	.create_qp = i40iw_create_qp,
+	.modify_qp = i40iw_modify_qp,
+	.query_qp = i40iw_query_qp,
+	.destroy_qp = i40iw_destroy_qp,
+	.post_send = i40iw_post_send,
+	.post_recv = i40iw_post_recv,
+	.drain_sq = i40iw_drain_sq,
+	.drain_rq = i40iw_drain_rq,
+	/* CQ operations */
+	.create_cq = i40iw_create_cq,
+	.destroy_cq = i40iw_destroy_cq,
+	.poll_cq = i40iw_poll_cq,
+	.req_notify_cq = i40iw_req_notify_cq,
+	/* MR operations */
+	.get_dma_mr = i40iw_get_dma_mr,
+	.reg_user_mr = i40iw_reg_user_mr,
+	.dereg_mr = i40iw_dereg_mr,
+	.alloc_mr = i40iw_alloc_mr,
+	.map_mr_sg = i40iw_map_mr_sg,
+	/* Stats operations */
+	.alloc_hw_stats = i40iw_alloc_hw_stats,
+	.get_hw_stats = i40iw_get_hw_stats,
+};
+
 /**
  * i40iw_init_rdma_device - initialization of iwarp device
  * @iwdev: iwarp device
@@ -2786,30 +2830,6 @@ static struct i40iw_ib_device *i40iw_init_rdma_device(struct i40iw_device *iwdev
 	iwibdev->ibdev.phys_port_cnt = 1;
 	iwibdev->ibdev.num_comp_vectors = iwdev->ceqs_count;
 	iwibdev->ibdev.dev.parent = &pcidev->dev;
-	iwibdev->ibdev.query_port = i40iw_query_port;
-	iwibdev->ibdev.query_pkey = i40iw_query_pkey;
-	iwibdev->ibdev.query_gid = i40iw_query_gid;
-	iwibdev->ibdev.alloc_ucontext = i40iw_alloc_ucontext;
-	iwibdev->ibdev.dealloc_ucontext = i40iw_dealloc_ucontext;
-	iwibdev->ibdev.mmap = i40iw_mmap;
-	iwibdev->ibdev.alloc_pd = i40iw_alloc_pd;
-	iwibdev->ibdev.dealloc_pd = i40iw_dealloc_pd;
-	iwibdev->ibdev.create_qp = i40iw_create_qp;
-	iwibdev->ibdev.modify_qp = i40iw_modify_qp;
-	iwibdev->ibdev.query_qp = i40iw_query_qp;
-	iwibdev->ibdev.destroy_qp = i40iw_destroy_qp;
-	iwibdev->ibdev.create_cq = i40iw_create_cq;
-	iwibdev->ibdev.destroy_cq = i40iw_destroy_cq;
-	iwibdev->ibdev.get_dma_mr = i40iw_get_dma_mr;
-	iwibdev->ibdev.reg_user_mr = i40iw_reg_user_mr;
-	iwibdev->ibdev.dereg_mr = i40iw_dereg_mr;
-	iwibdev->ibdev.alloc_hw_stats = i40iw_alloc_hw_stats;
-	iwibdev->ibdev.get_hw_stats = i40iw_get_hw_stats;
-	iwibdev->ibdev.query_device = i40iw_query_device;
-	iwibdev->ibdev.drain_sq = i40iw_drain_sq;
-	iwibdev->ibdev.drain_rq = i40iw_drain_rq;
-	iwibdev->ibdev.alloc_mr = i40iw_alloc_mr;
-	iwibdev->ibdev.map_mr_sg = i40iw_map_mr_sg;
 	iwibdev->ibdev.iwcm = kzalloc(sizeof(*iwibdev->ibdev.iwcm), GFP_KERNEL);
 	if (!iwibdev->ibdev.iwcm) {
 		ib_dealloc_device(&iwibdev->ibdev);
@@ -2826,13 +2846,7 @@ static struct i40iw_ib_device *i40iw_init_rdma_device(struct i40iw_device *iwdev
 	iwibdev->ibdev.iwcm->destroy_listen = i40iw_destroy_listen;
 	memcpy(iwibdev->ibdev.iwcm->ifname, netdev->name,
 	       sizeof(iwibdev->ibdev.iwcm->ifname));
-	iwibdev->ibdev.get_port_immutable = i40iw_port_immutable;
-	iwibdev->ibdev.get_dev_fw_str = i40iw_get_dev_fw_str;
-	iwibdev->ibdev.poll_cq = i40iw_poll_cq;
-	iwibdev->ibdev.req_notify_cq = i40iw_req_notify_cq;
-	iwibdev->ibdev.post_send = i40iw_post_send;
-	iwibdev->ibdev.post_recv = i40iw_post_recv;
-	iwibdev->ibdev.get_vector_affinity = i40iw_get_vector_affinity;
+	ib_set_device_ops(&iwibdev->ibdev, &i40iw_dev_ops);
 
 	return iwibdev;
 }

From patchwork Mon Nov 5 11:35:16 2018
X-Patchwork-Submitter: Kamal Heib
X-Patchwork-Id: 10667881
From: Kamal Heib
To: Doug Ledford, Jason Gunthorpe
Cc: linux-rdma@vger.kernel.org, kamalheib1@gmail.com
Subject: [PATCH rdma-next v3 08/20] RDMA/mlx4: Initialize ib_device_ops struct
Date: Mon, 5 Nov 2018 13:35:16 +0200
Message-Id: <20181105113528.8317-9-kamalheib1@gmail.com>
In-Reply-To: <20181105113528.8317-1-kamalheib1@gmail.com>
References: <20181105113528.8317-1-kamalheib1@gmail.com>

Initialize ib_device_ops with the supported operations using
ib_set_device_ops().

Signed-off-by: Kamal Heib
---
 drivers/infiniband/hw/mlx4/main.c | 191 ++++++++++++++++++++++----------------
 1 file changed, 112 insertions(+), 79 deletions(-)

diff --git a/drivers/infiniband/hw/mlx4/main.c b/drivers/infiniband/hw/mlx4/main.c
index 0def2323459c..b74a238374fb 100644
--- a/drivers/infiniband/hw/mlx4/main.c
+++ b/drivers/infiniband/hw/mlx4/main.c
@@ -2220,6 +2220,11 @@ static void mlx4_ib_fill_diag_counters(struct mlx4_ib_dev *ibdev,
 	}
 }
 
+static const struct ib_device_ops mlx4_ib_hw_stats_ops = {
+	.get_hw_stats = mlx4_ib_get_hw_stats,
+	.alloc_hw_stats = mlx4_ib_alloc_hw_stats,
+};
+
 static int mlx4_ib_alloc_diag_counters(struct mlx4_ib_dev *ibdev)
 {
 	struct mlx4_ib_diag_counters *diag = ibdev->diag_counters;
@@ -2246,8 +2251,7 @@ static int mlx4_ib_alloc_diag_counters(struct mlx4_ib_dev *ibdev)
 				  diag[i].offset, i);
 	}
 
-	ibdev->ib_dev.get_hw_stats = mlx4_ib_get_hw_stats;
-	ibdev->ib_dev.alloc_hw_stats = mlx4_ib_alloc_hw_stats;
+	ib_set_device_ops(&ibdev->ib_dev, &mlx4_ib_hw_stats_ops);
 
 	return 0;
 
@@ -2499,6 +2503,101 @@ static void get_fw_ver_str(struct ib_device *device, char *str)
 		 (int) dev->dev->caps.fw_ver & 0xffff);
 }
 
+static const struct ib_device_ops mlx4_ib_dev_ops = {
+	/* Device operations */
+	.query_device = mlx4_ib_query_device,
+	.modify_device = mlx4_ib_modify_device,
+	.get_dev_fw_str = get_fw_ver_str,
+	/* Port operations */
+	.get_netdev = mlx4_ib_get_netdev,
+	.query_port = mlx4_ib_query_port,
+	.get_link_layer = mlx4_ib_port_link_layer,
+	.modify_port = mlx4_ib_modify_port,
+	.get_port_immutable = mlx4_port_immutable,
+	/* GID operations */
+	.add_gid = mlx4_ib_add_gid,
+	.del_gid = mlx4_ib_del_gid,
+	.query_gid = mlx4_ib_query_gid,
+	/* PKey operations */
+	.query_pkey = mlx4_ib_query_pkey,
+	/* Ucontext operations */
+	.alloc_ucontext = mlx4_ib_alloc_ucontext,
+	.dealloc_ucontext = mlx4_ib_dealloc_ucontext,
+	.mmap = mlx4_ib_mmap,
+	.disassociate_ucontext = mlx4_ib_disassociate_ucontext,
+	/* PD operations */
+	.alloc_pd = mlx4_ib_alloc_pd,
+	.dealloc_pd = mlx4_ib_dealloc_pd,
+	/* AH operations */
+	.create_ah = mlx4_ib_create_ah,
+	.query_ah = mlx4_ib_query_ah,
+	.destroy_ah = mlx4_ib_destroy_ah,
+	/* SRQ operations */
+	.create_srq = mlx4_ib_create_srq,
+	.modify_srq = mlx4_ib_modify_srq,
+	.query_srq = mlx4_ib_query_srq,
+	.destroy_srq = mlx4_ib_destroy_srq,
+	.post_srq_recv = mlx4_ib_post_srq_recv,
+	/* QP operations */
+	.create_qp = mlx4_ib_create_qp,
+	.modify_qp = mlx4_ib_modify_qp,
+	.query_qp = mlx4_ib_query_qp,
+	.destroy_qp = mlx4_ib_destroy_qp,
+	.drain_sq = mlx4_ib_drain_sq,
+	.drain_rq = mlx4_ib_drain_rq,
+	.post_send = mlx4_ib_post_send,
+	.post_recv = mlx4_ib_post_recv,
+	/* CQ operations */
+	.create_cq = mlx4_ib_create_cq,
+	.modify_cq = mlx4_ib_modify_cq,
+	.resize_cq = mlx4_ib_resize_cq,
+	.destroy_cq = mlx4_ib_destroy_cq,
+	.poll_cq = mlx4_ib_poll_cq,
+	.req_notify_cq = mlx4_ib_arm_cq,
+	/* MR operations */
+	.get_dma_mr = mlx4_ib_get_dma_mr,
+	.reg_user_mr = mlx4_ib_reg_user_mr,
+	.rereg_user_mr = mlx4_ib_rereg_user_mr,
+	.dereg_mr = mlx4_ib_dereg_mr,
+	.alloc_mr = mlx4_ib_alloc_mr,
+	.map_mr_sg = mlx4_ib_map_mr_sg,
+	/* Multicast operations */
+	.attach_mcast = mlx4_ib_mcg_attach,
+	.detach_mcast = mlx4_ib_mcg_detach,
+	/* MAD operations */
+	.process_mad = mlx4_ib_process_mad,
+};
+
+static const struct ib_device_ops mlx4_ib_dev_wq_ops = {
+	.create_wq = mlx4_ib_create_wq,
+	.modify_wq = mlx4_ib_modify_wq,
+	.destroy_wq = mlx4_ib_destroy_wq,
+	.create_rwq_ind_table = mlx4_ib_create_rwq_ind_table,
+	.destroy_rwq_ind_table = mlx4_ib_destroy_rwq_ind_table,
+};
+
+static const struct ib_device_ops mlx4_ib_dev_fmr_ops = {
+	.alloc_fmr = mlx4_ib_fmr_alloc,
+	.map_phys_fmr = mlx4_ib_map_phys_fmr,
+	.unmap_fmr = mlx4_ib_unmap_fmr,
+	.dealloc_fmr = mlx4_ib_fmr_dealloc,
+};
+
+static const struct ib_device_ops mlx4_ib_dev_mw_ops = {
+	.alloc_mw = mlx4_ib_alloc_mw,
+	.dealloc_mw = mlx4_ib_dealloc_mw,
+};
+
+static const struct ib_device_ops mlx4_ib_dev_xrc_ops = {
+	.alloc_xrcd = mlx4_ib_alloc_xrcd,
+	.dealloc_xrcd = mlx4_ib_dealloc_xrcd,
+};
+
+static const struct ib_device_ops mlx4_ib_dev_fs_ops = {
+	.create_flow = mlx4_ib_create_flow,
+	.destroy_flow = mlx4_ib_destroy_flow,
+};
+
 static void *mlx4_ib_add(struct mlx4_dev *dev)
 {
 	struct mlx4_ib_dev *ibdev;
@@ -2554,9 +2653,6 @@ static void *mlx4_ib_add(struct mlx4_dev *dev)
 		1 : ibdev->num_ports;
 	ibdev->ib_dev.num_comp_vectors = dev->caps.num_comp_vectors;
 	ibdev->ib_dev.dev.parent = &dev->persist->pdev->dev;
-	ibdev->ib_dev.get_netdev = mlx4_ib_get_netdev;
-	ibdev->ib_dev.add_gid = mlx4_ib_add_gid;
-	ibdev->ib_dev.del_gid = mlx4_ib_del_gid;
 
 	if (dev->caps.userspace_caps)
 		ibdev->ib_dev.uverbs_abi_ver = MLX4_IB_UVERBS_ABI_VERSION;
@@ -2589,116 +2685,53 @@ static void *mlx4_ib_add(struct mlx4_dev *dev)
 		(1ull << IB_USER_VERBS_CMD_CREATE_XSRQ) |
 		(1ull << IB_USER_VERBS_CMD_OPEN_QP);
 
-	ibdev->ib_dev.query_device = mlx4_ib_query_device;
-	ibdev->ib_dev.query_port = mlx4_ib_query_port;
-	ibdev->ib_dev.get_link_layer = mlx4_ib_port_link_layer;
-	ibdev->ib_dev.query_gid = mlx4_ib_query_gid;
-	ibdev->ib_dev.query_pkey = mlx4_ib_query_pkey;
-	ibdev->ib_dev.modify_device = mlx4_ib_modify_device;
-	ibdev->ib_dev.modify_port = mlx4_ib_modify_port;
-	ibdev->ib_dev.alloc_ucontext = mlx4_ib_alloc_ucontext;
-	ibdev->ib_dev.dealloc_ucontext = mlx4_ib_dealloc_ucontext;
-	ibdev->ib_dev.mmap = mlx4_ib_mmap;
-	ibdev->ib_dev.alloc_pd = mlx4_ib_alloc_pd;
-	ibdev->ib_dev.dealloc_pd = mlx4_ib_dealloc_pd;
-	ibdev->ib_dev.create_ah = mlx4_ib_create_ah;
-	ibdev->ib_dev.query_ah = mlx4_ib_query_ah;
-	ibdev->ib_dev.destroy_ah = mlx4_ib_destroy_ah;
-	ibdev->ib_dev.create_srq = mlx4_ib_create_srq;
-	ibdev->ib_dev.modify_srq = mlx4_ib_modify_srq;
-	ibdev->ib_dev.query_srq = mlx4_ib_query_srq;
-	ibdev->ib_dev.destroy_srq = mlx4_ib_destroy_srq;
-	ibdev->ib_dev.post_srq_recv = mlx4_ib_post_srq_recv;
-	ibdev->ib_dev.create_qp = mlx4_ib_create_qp;
-	ibdev->ib_dev.modify_qp = mlx4_ib_modify_qp;
-	ibdev->ib_dev.query_qp = mlx4_ib_query_qp;
-	ibdev->ib_dev.destroy_qp = mlx4_ib_destroy_qp;
-	ibdev->ib_dev.drain_sq = mlx4_ib_drain_sq;
-	ibdev->ib_dev.drain_rq = mlx4_ib_drain_rq;
-	ibdev->ib_dev.post_send = mlx4_ib_post_send;
-	ibdev->ib_dev.post_recv = mlx4_ib_post_recv;
-	ibdev->ib_dev.create_cq = mlx4_ib_create_cq;
-	ibdev->ib_dev.modify_cq = mlx4_ib_modify_cq;
-	ibdev->ib_dev.resize_cq = mlx4_ib_resize_cq;
-	ibdev->ib_dev.destroy_cq = mlx4_ib_destroy_cq;
-	ibdev->ib_dev.poll_cq = mlx4_ib_poll_cq;
-	ibdev->ib_dev.req_notify_cq = mlx4_ib_arm_cq;
-	ibdev->ib_dev.get_dma_mr = mlx4_ib_get_dma_mr;
-	ibdev->ib_dev.reg_user_mr = mlx4_ib_reg_user_mr;
-	ibdev->ib_dev.rereg_user_mr = mlx4_ib_rereg_user_mr;
-	ibdev->ib_dev.dereg_mr = mlx4_ib_dereg_mr;
-	ibdev->ib_dev.alloc_mr = mlx4_ib_alloc_mr;
-	ibdev->ib_dev.map_mr_sg = mlx4_ib_map_mr_sg;
-	ibdev->ib_dev.attach_mcast = mlx4_ib_mcg_attach;
-	ibdev->ib_dev.detach_mcast = mlx4_ib_mcg_detach;
-	ibdev->ib_dev.process_mad = mlx4_ib_process_mad;
-	ibdev->ib_dev.get_port_immutable = mlx4_port_immutable;
-	ibdev->ib_dev.get_dev_fw_str = get_fw_ver_str;
-	ibdev->ib_dev.disassociate_ucontext = mlx4_ib_disassociate_ucontext;
-
+	ib_set_device_ops(&ibdev->ib_dev, &mlx4_ib_dev_ops);
 	ibdev->ib_dev.uverbs_ex_cmd_mask |=
-		(1ull << IB_USER_VERBS_EX_CMD_MODIFY_CQ);
+		(1ull << IB_USER_VERBS_EX_CMD_MODIFY_CQ) |
+		(1ull << IB_USER_VERBS_EX_CMD_QUERY_DEVICE) |
+		(1ull << IB_USER_VERBS_EX_CMD_CREATE_CQ) |
+		(1ull << IB_USER_VERBS_EX_CMD_CREATE_QP);
 
 	if ((dev->caps.flags2 & MLX4_DEV_CAP_FLAG2_RSS) &&
 	    ((mlx4_ib_port_link_layer(&ibdev->ib_dev, 1) ==
 	    IB_LINK_LAYER_ETHERNET) ||
 	     (mlx4_ib_port_link_layer(&ibdev->ib_dev, 2) ==
 	    IB_LINK_LAYER_ETHERNET))) {
-		ibdev->ib_dev.create_wq = mlx4_ib_create_wq;
-		ibdev->ib_dev.modify_wq = mlx4_ib_modify_wq;
-		ibdev->ib_dev.destroy_wq = mlx4_ib_destroy_wq;
-		ibdev->ib_dev.create_rwq_ind_table =
-			mlx4_ib_create_rwq_ind_table;
-		ibdev->ib_dev.destroy_rwq_ind_table =
-			mlx4_ib_destroy_rwq_ind_table;
 		ibdev->ib_dev.uverbs_ex_cmd_mask |=
 			(1ull << IB_USER_VERBS_EX_CMD_CREATE_WQ) |
 			(1ull << IB_USER_VERBS_EX_CMD_MODIFY_WQ) |
 			(1ull << IB_USER_VERBS_EX_CMD_DESTROY_WQ) |
 			(1ull << IB_USER_VERBS_EX_CMD_CREATE_RWQ_IND_TBL) |
 			(1ull << IB_USER_VERBS_EX_CMD_DESTROY_RWQ_IND_TBL);
+		ib_set_device_ops(&ibdev->ib_dev, &mlx4_ib_dev_wq_ops);
 	}
 
-	if (!mlx4_is_slave(ibdev->dev)) {
-		ibdev->ib_dev.alloc_fmr = mlx4_ib_fmr_alloc;
-		ibdev->ib_dev.map_phys_fmr = mlx4_ib_map_phys_fmr;
-		ibdev->ib_dev.unmap_fmr = mlx4_ib_unmap_fmr;
-		ibdev->ib_dev.dealloc_fmr = mlx4_ib_fmr_dealloc;
-	}
+	if (!mlx4_is_slave(ibdev->dev))
+		ib_set_device_ops(&ibdev->ib_dev, &mlx4_ib_dev_fmr_ops);
 
 	if (dev->caps.flags & MLX4_DEV_CAP_FLAG_MEM_WINDOW ||
 	    dev->caps.bmme_flags & MLX4_BMME_FLAG_TYPE_2_WIN) {
-		ibdev->ib_dev.alloc_mw = mlx4_ib_alloc_mw;
-		ibdev->ib_dev.dealloc_mw = mlx4_ib_dealloc_mw;
-
 		ibdev->ib_dev.uverbs_cmd_mask |=
 			(1ull << IB_USER_VERBS_CMD_ALLOC_MW) |
 			(1ull << IB_USER_VERBS_CMD_DEALLOC_MW);
+		ib_set_device_ops(&ibdev->ib_dev, &mlx4_ib_dev_mw_ops);
 	}
 
 	if (dev->caps.flags & MLX4_DEV_CAP_FLAG_XRC) {
-		ibdev->ib_dev.alloc_xrcd = mlx4_ib_alloc_xrcd;
-		ibdev->ib_dev.dealloc_xrcd = mlx4_ib_dealloc_xrcd;
 		ibdev->ib_dev.uverbs_cmd_mask |=
 			(1ull << IB_USER_VERBS_CMD_OPEN_XRCD) |
 			(1ull << IB_USER_VERBS_CMD_CLOSE_XRCD);
+		ib_set_device_ops(&ibdev->ib_dev, &mlx4_ib_dev_xrc_ops);
 	}
 
 	if (check_flow_steering_support(dev)) {
 		ibdev->steering_support = MLX4_STEERING_MODE_DEVICE_MANAGED;
-		ibdev->ib_dev.create_flow = mlx4_ib_create_flow;
-		ibdev->ib_dev.destroy_flow = mlx4_ib_destroy_flow;
-
 		ibdev->ib_dev.uverbs_ex_cmd_mask |=
 			(1ull << IB_USER_VERBS_EX_CMD_CREATE_FLOW) |
 			(1ull << IB_USER_VERBS_EX_CMD_DESTROY_FLOW);
+		ib_set_device_ops(&ibdev->ib_dev, &mlx4_ib_dev_fs_ops);
 	}
 
-	ibdev->ib_dev.uverbs_ex_cmd_mask |=
-		(1ull << IB_USER_VERBS_EX_CMD_QUERY_DEVICE) |
-		(1ull << IB_USER_VERBS_EX_CMD_CREATE_CQ) |
-		(1ull << IB_USER_VERBS_EX_CMD_CREATE_QP);
-
 	mlx4_ib_alloc_eqs(dev, ibdev);
 
 	spin_lock_init(&iboe->lock);

From patchwork Mon Nov 5 11:35:17 2018
X-Patchwork-Submitter: Kamal Heib
X-Patchwork-Id: 10667883
From: Kamal Heib
To: Doug Ledford, Jason Gunthorpe
Cc: linux-rdma@vger.kernel.org, kamalheib1@gmail.com
Subject: [PATCH rdma-next v3 09/20] RDMA/mlx5: Initialize ib_device_ops struct
Date: Mon, 5 Nov 2018 13:35:17 +0200
Message-Id: <20181105113528.8317-10-kamalheib1@gmail.com>
In-Reply-To: <20181105113528.8317-1-kamalheib1@gmail.com>
<20181105113528.8317-1-kamalheib1@gmail.com> Sender: linux-rdma-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-rdma@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP Initialize ib_device_ops with the supported operations using ib_set_device_ops(). Signed-off-by: Kamal Heib --- drivers/infiniband/hw/mlx5/main.c | 235 +++++++++++++++++++++++--------------- 1 file changed, 142 insertions(+), 93 deletions(-) diff --git a/drivers/infiniband/hw/mlx5/main.c b/drivers/infiniband/hw/mlx5/main.c index be701d40289e..c8b4b514638a 100644 --- a/drivers/infiniband/hw/mlx5/main.c +++ b/drivers/infiniband/hw/mlx5/main.c @@ -5764,6 +5764,107 @@ static void mlx5_ib_stage_flow_db_cleanup(struct mlx5_ib_dev *dev) kfree(dev->flow_db); } +static const struct ib_device_ops mlx5_ib_dev_ops = { + /* Device operations */ + .query_device = mlx5_ib_query_device, + .modify_device = mlx5_ib_modify_device, + .get_dev_fw_str = get_dev_fw_str, + .get_vector_affinity = mlx5_ib_get_vector_affinity, + /* Port operations */ + .modify_port = mlx5_ib_modify_port, + .get_link_layer = mlx5_ib_port_link_layer, + /* GID operations */ + .query_gid = mlx5_ib_query_gid, + .add_gid = mlx5_ib_add_gid, + .del_gid = mlx5_ib_del_gid, + /* PKey operations */ + .query_pkey = mlx5_ib_query_pkey, + /* Ucontext operations */ + .alloc_ucontext = mlx5_ib_alloc_ucontext, + .dealloc_ucontext = mlx5_ib_dealloc_ucontext, + .mmap = mlx5_ib_mmap, + .disassociate_ucontext = mlx5_ib_disassociate_ucontext, + /* PD operations */ + .alloc_pd = mlx5_ib_alloc_pd, + .dealloc_pd = mlx5_ib_dealloc_pd, + /* AH operations */ + .create_ah = mlx5_ib_create_ah, + .query_ah = mlx5_ib_query_ah, + .destroy_ah = mlx5_ib_destroy_ah, + /* SRQ operations */ + .create_srq = mlx5_ib_create_srq, + .modify_srq = mlx5_ib_modify_srq, + .query_srq = mlx5_ib_query_srq, + .destroy_srq = mlx5_ib_destroy_srq, + .post_srq_recv = mlx5_ib_post_srq_recv, + /* QP operations */ + .create_qp = mlx5_ib_create_qp, + .modify_qp = 
mlx5_ib_modify_qp, + .query_qp = mlx5_ib_query_qp, + .destroy_qp = mlx5_ib_destroy_qp, + .drain_sq = mlx5_ib_drain_sq, + .drain_rq = mlx5_ib_drain_rq, + .post_send = mlx5_ib_post_send, + .post_recv = mlx5_ib_post_recv, + /* CQ operations */ + .create_cq = mlx5_ib_create_cq, + .modify_cq = mlx5_ib_modify_cq, + .resize_cq = mlx5_ib_resize_cq, + .destroy_cq = mlx5_ib_destroy_cq, + .poll_cq = mlx5_ib_poll_cq, + .req_notify_cq = mlx5_ib_arm_cq, + /* MR operations */ + .get_dma_mr = mlx5_ib_get_dma_mr, + .reg_user_mr = mlx5_ib_reg_user_mr, + .rereg_user_mr = mlx5_ib_rereg_user_mr, + .dereg_mr = mlx5_ib_dereg_mr, + .alloc_mr = mlx5_ib_alloc_mr, + .map_mr_sg = mlx5_ib_map_mr_sg, + .check_mr_status = mlx5_ib_check_mr_status, + /* Multicast operations */ + .attach_mcast = mlx5_ib_mcg_attach, + .detach_mcast = mlx5_ib_mcg_detach, + /* MAD operations */ + .process_mad = mlx5_ib_process_mad, + /* Flow operations */ + .create_flow = mlx5_ib_create_flow, + .destroy_flow = mlx5_ib_destroy_flow, + .create_flow_action_esp = mlx5_ib_create_flow_action_esp, + .destroy_flow_action = mlx5_ib_destroy_flow_action, + .modify_flow_action_esp = mlx5_ib_modify_flow_action_esp, + /* Counters operations */ + .create_counters = mlx5_ib_create_counters, + .destroy_counters = mlx5_ib_destroy_counters, + .read_counters = mlx5_ib_read_counters, +}; + +static const struct ib_device_ops mlx5_ib_dev_ipoib_enhanced_ops = { + .alloc_rdma_netdev = mlx5_ib_alloc_rdma_netdev, +}; + +static const struct ib_device_ops mlx5_ib_dev_sriov_ops = { + .get_vf_config = mlx5_ib_get_vf_config, + .set_vf_link_state = mlx5_ib_set_vf_link_state, + .get_vf_stats = mlx5_ib_get_vf_stats, + .set_vf_guid = mlx5_ib_set_vf_guid, +}; + +static const struct ib_device_ops mlx5_ib_dev_mw_ops = { + .alloc_mw = mlx5_ib_alloc_mw, + .dealloc_mw = mlx5_ib_dealloc_mw, +}; + +static const struct ib_device_ops mlx5_ib_dev_xrc_ops = { + .alloc_xrcd = mlx5_ib_alloc_xrcd, + .dealloc_xrcd = mlx5_ib_dealloc_xrcd, +}; + +static const struct 
ib_device_ops mlx5_ib_dev_dm_ops = { + .alloc_dm = mlx5_ib_alloc_dm, + .dealloc_dm = mlx5_ib_dealloc_dm, + .reg_dm_mr = mlx5_ib_reg_dm_mr, +}; + int mlx5_ib_stage_caps_init(struct mlx5_ib_dev *dev) { struct mlx5_core_dev *mdev = dev->mdev; @@ -5802,103 +5903,38 @@ int mlx5_ib_stage_caps_init(struct mlx5_ib_dev *dev) (1ull << IB_USER_VERBS_EX_CMD_CREATE_CQ) | (1ull << IB_USER_VERBS_EX_CMD_CREATE_QP) | (1ull << IB_USER_VERBS_EX_CMD_MODIFY_QP) | - (1ull << IB_USER_VERBS_EX_CMD_MODIFY_CQ); - - dev->ib_dev.query_device = mlx5_ib_query_device; - dev->ib_dev.get_link_layer = mlx5_ib_port_link_layer; - dev->ib_dev.query_gid = mlx5_ib_query_gid; - dev->ib_dev.add_gid = mlx5_ib_add_gid; - dev->ib_dev.del_gid = mlx5_ib_del_gid; - dev->ib_dev.query_pkey = mlx5_ib_query_pkey; - dev->ib_dev.modify_device = mlx5_ib_modify_device; - dev->ib_dev.modify_port = mlx5_ib_modify_port; - dev->ib_dev.alloc_ucontext = mlx5_ib_alloc_ucontext; - dev->ib_dev.dealloc_ucontext = mlx5_ib_dealloc_ucontext; - dev->ib_dev.mmap = mlx5_ib_mmap; - dev->ib_dev.alloc_pd = mlx5_ib_alloc_pd; - dev->ib_dev.dealloc_pd = mlx5_ib_dealloc_pd; - dev->ib_dev.create_ah = mlx5_ib_create_ah; - dev->ib_dev.query_ah = mlx5_ib_query_ah; - dev->ib_dev.destroy_ah = mlx5_ib_destroy_ah; - dev->ib_dev.create_srq = mlx5_ib_create_srq; - dev->ib_dev.modify_srq = mlx5_ib_modify_srq; - dev->ib_dev.query_srq = mlx5_ib_query_srq; - dev->ib_dev.destroy_srq = mlx5_ib_destroy_srq; - dev->ib_dev.post_srq_recv = mlx5_ib_post_srq_recv; - dev->ib_dev.create_qp = mlx5_ib_create_qp; - dev->ib_dev.modify_qp = mlx5_ib_modify_qp; - dev->ib_dev.query_qp = mlx5_ib_query_qp; - dev->ib_dev.destroy_qp = mlx5_ib_destroy_qp; - dev->ib_dev.drain_sq = mlx5_ib_drain_sq; - dev->ib_dev.drain_rq = mlx5_ib_drain_rq; - dev->ib_dev.post_send = mlx5_ib_post_send; - dev->ib_dev.post_recv = mlx5_ib_post_recv; - dev->ib_dev.create_cq = mlx5_ib_create_cq; - dev->ib_dev.modify_cq = mlx5_ib_modify_cq; - dev->ib_dev.resize_cq = mlx5_ib_resize_cq; - 
dev->ib_dev.destroy_cq = mlx5_ib_destroy_cq; - dev->ib_dev.poll_cq = mlx5_ib_poll_cq; - dev->ib_dev.req_notify_cq = mlx5_ib_arm_cq; - dev->ib_dev.get_dma_mr = mlx5_ib_get_dma_mr; - dev->ib_dev.reg_user_mr = mlx5_ib_reg_user_mr; - dev->ib_dev.rereg_user_mr = mlx5_ib_rereg_user_mr; - dev->ib_dev.dereg_mr = mlx5_ib_dereg_mr; - dev->ib_dev.attach_mcast = mlx5_ib_mcg_attach; - dev->ib_dev.detach_mcast = mlx5_ib_mcg_detach; - dev->ib_dev.process_mad = mlx5_ib_process_mad; - dev->ib_dev.alloc_mr = mlx5_ib_alloc_mr; - dev->ib_dev.map_mr_sg = mlx5_ib_map_mr_sg; - dev->ib_dev.check_mr_status = mlx5_ib_check_mr_status; - dev->ib_dev.get_dev_fw_str = get_dev_fw_str; - dev->ib_dev.get_vector_affinity = mlx5_ib_get_vector_affinity; - if (MLX5_CAP_GEN(mdev, ipoib_enhanced_offloads)) - dev->ib_dev.alloc_rdma_netdev = mlx5_ib_alloc_rdma_netdev; + (1ull << IB_USER_VERBS_EX_CMD_MODIFY_CQ) | + (1ull << IB_USER_VERBS_EX_CMD_CREATE_FLOW) | + (1ull << IB_USER_VERBS_EX_CMD_DESTROY_FLOW); - if (mlx5_core_is_pf(mdev)) { - dev->ib_dev.get_vf_config = mlx5_ib_get_vf_config; - dev->ib_dev.set_vf_link_state = mlx5_ib_set_vf_link_state; - dev->ib_dev.get_vf_stats = mlx5_ib_get_vf_stats; - dev->ib_dev.set_vf_guid = mlx5_ib_set_vf_guid; - } + if (MLX5_CAP_GEN(mdev, ipoib_enhanced_offloads)) + ib_set_device_ops(&dev->ib_dev, + &mlx5_ib_dev_ipoib_enhanced_ops); - dev->ib_dev.disassociate_ucontext = mlx5_ib_disassociate_ucontext; + if (mlx5_core_is_pf(mdev)) + ib_set_device_ops(&dev->ib_dev, &mlx5_ib_dev_sriov_ops); dev->umr_fence = mlx5_get_umr_fence(MLX5_CAP_GEN(mdev, umr_fence)); if (MLX5_CAP_GEN(mdev, imaicl)) { - dev->ib_dev.alloc_mw = mlx5_ib_alloc_mw; - dev->ib_dev.dealloc_mw = mlx5_ib_dealloc_mw; dev->ib_dev.uverbs_cmd_mask |= (1ull << IB_USER_VERBS_CMD_ALLOC_MW) | (1ull << IB_USER_VERBS_CMD_DEALLOC_MW); + ib_set_device_ops(&dev->ib_dev, &mlx5_ib_dev_mw_ops); } if (MLX5_CAP_GEN(mdev, xrc)) { - dev->ib_dev.alloc_xrcd = mlx5_ib_alloc_xrcd; - dev->ib_dev.dealloc_xrcd = mlx5_ib_dealloc_xrcd; 
dev->ib_dev.uverbs_cmd_mask |= (1ull << IB_USER_VERBS_CMD_OPEN_XRCD) | (1ull << IB_USER_VERBS_CMD_CLOSE_XRCD); + ib_set_device_ops(&dev->ib_dev, &mlx5_ib_dev_xrc_ops); } - if (MLX5_CAP_DEV_MEM(mdev, memic)) { - dev->ib_dev.alloc_dm = mlx5_ib_alloc_dm; - dev->ib_dev.dealloc_dm = mlx5_ib_dealloc_dm; - dev->ib_dev.reg_dm_mr = mlx5_ib_reg_dm_mr; - } + if (MLX5_CAP_DEV_MEM(mdev, memic)) + ib_set_device_ops(&dev->ib_dev, &mlx5_ib_dev_dm_ops); - dev->ib_dev.create_flow = mlx5_ib_create_flow; - dev->ib_dev.destroy_flow = mlx5_ib_destroy_flow; - dev->ib_dev.uverbs_ex_cmd_mask |= - (1ull << IB_USER_VERBS_EX_CMD_CREATE_FLOW) | - (1ull << IB_USER_VERBS_EX_CMD_DESTROY_FLOW); - dev->ib_dev.create_flow_action_esp = mlx5_ib_create_flow_action_esp; - dev->ib_dev.destroy_flow_action = mlx5_ib_destroy_flow_action; - dev->ib_dev.modify_flow_action_esp = mlx5_ib_modify_flow_action_esp; dev->ib_dev.driver_id = RDMA_DRIVER_MLX5; - dev->ib_dev.create_counters = mlx5_ib_create_counters; - dev->ib_dev.destroy_counters = mlx5_ib_destroy_counters; - dev->ib_dev.read_counters = mlx5_ib_read_counters; + ib_set_device_ops(&dev->ib_dev, &mlx5_ib_dev_ops); err = init_node_data(dev); if (err) @@ -5912,22 +5948,37 @@ int mlx5_ib_stage_caps_init(struct mlx5_ib_dev *dev) return 0; } +static const struct ib_device_ops mlx5_ib_dev_port_ops = { + .get_port_immutable = mlx5_port_immutable, + .query_port = mlx5_ib_query_port, +}; + static int mlx5_ib_stage_non_default_cb(struct mlx5_ib_dev *dev) { - dev->ib_dev.get_port_immutable = mlx5_port_immutable; - dev->ib_dev.query_port = mlx5_ib_query_port; - + ib_set_device_ops(&dev->ib_dev, &mlx5_ib_dev_port_ops); return 0; } +static const struct ib_device_ops mlx5_ib_dev_port_rep_ops = { + .get_port_immutable = mlx5_port_rep_immutable, + .query_port = mlx5_ib_rep_query_port, +}; + int mlx5_ib_stage_rep_non_default_cb(struct mlx5_ib_dev *dev) { - dev->ib_dev.get_port_immutable = mlx5_port_rep_immutable; - dev->ib_dev.query_port = mlx5_ib_rep_query_port; - + 
ib_set_device_ops(&dev->ib_dev, &mlx5_ib_dev_port_rep_ops); return 0; } +static const struct ib_device_ops mlx5_ib_dev_common_roce_ops = { + .get_netdev = mlx5_ib_get_netdev, + .create_wq = mlx5_ib_create_wq, + .modify_wq = mlx5_ib_modify_wq, + .destroy_wq = mlx5_ib_destroy_wq, + .create_rwq_ind_table = mlx5_ib_create_rwq_ind_table, + .destroy_rwq_ind_table = mlx5_ib_destroy_rwq_ind_table, +}; + static int mlx5_ib_stage_common_roce_init(struct mlx5_ib_dev *dev) { u8 port_num; @@ -5939,19 +5990,13 @@ static int mlx5_ib_stage_common_roce_init(struct mlx5_ib_dev *dev) dev->roce[i].last_port_state = IB_PORT_DOWN; } - dev->ib_dev.get_netdev = mlx5_ib_get_netdev; - dev->ib_dev.create_wq = mlx5_ib_create_wq; - dev->ib_dev.modify_wq = mlx5_ib_modify_wq; - dev->ib_dev.destroy_wq = mlx5_ib_destroy_wq; - dev->ib_dev.create_rwq_ind_table = mlx5_ib_create_rwq_ind_table; - dev->ib_dev.destroy_rwq_ind_table = mlx5_ib_destroy_rwq_ind_table; - dev->ib_dev.uverbs_ex_cmd_mask |= (1ull << IB_USER_VERBS_EX_CMD_CREATE_WQ) | (1ull << IB_USER_VERBS_EX_CMD_MODIFY_WQ) | (1ull << IB_USER_VERBS_EX_CMD_DESTROY_WQ) | (1ull << IB_USER_VERBS_EX_CMD_CREATE_RWQ_IND_TBL) | (1ull << IB_USER_VERBS_EX_CMD_DESTROY_RWQ_IND_TBL); + ib_set_device_ops(&dev->ib_dev, &mlx5_ib_dev_common_roce_ops); port_num = mlx5_core_native_port_num(dev->mdev) - 1; @@ -6045,11 +6090,15 @@ static int mlx5_ib_stage_odp_init(struct mlx5_ib_dev *dev) return mlx5_ib_odp_init_one(dev); } +static const struct ib_device_ops mlx5_ib_dev_hw_stats_ops = { + .get_hw_stats = mlx5_ib_get_hw_stats, + .alloc_hw_stats = mlx5_ib_alloc_hw_stats, +}; + int mlx5_ib_stage_counters_init(struct mlx5_ib_dev *dev) { if (MLX5_CAP_GEN(dev->mdev, max_qp_cnt)) { - dev->ib_dev.get_hw_stats = mlx5_ib_get_hw_stats; - dev->ib_dev.alloc_hw_stats = mlx5_ib_alloc_hw_stats; + ib_set_device_ops(&dev->ib_dev, &mlx5_ib_dev_hw_stats_ops); return mlx5_ib_alloc_counters(dev); } From patchwork Mon Nov 5 11:35:18 2018 Content-Type: text/plain; charset="utf-8" 
From: Kamal Heib
To: Doug Ledford, Jason Gunthorpe
Cc: linux-rdma@vger.kernel.org, kamalheib1@gmail.com
Subject: [PATCH rdma-next v3 10/20] RDMA/mthca: Initialize ib_device_ops struct
Date: Mon, 5 Nov 2018 13:35:18 +0200
Message-Id: <20181105113528.8317-11-kamalheib1@gmail.com>
In-Reply-To: <20181105113528.8317-1-kamalheib1@gmail.com>

Initialize ib_device_ops with the supported operations using
ib_set_device_ops().
Signed-off-by: Kamal Heib --- drivers/infiniband/hw/mthca/mthca_provider.c | 151 ++++++++++++++++++--------- 1 file changed, 100 insertions(+), 51 deletions(-) diff --git a/drivers/infiniband/hw/mthca/mthca_provider.c b/drivers/infiniband/hw/mthca/mthca_provider.c index 691c6f048938..6ebb9d1b551a 100644 --- a/drivers/infiniband/hw/mthca/mthca_provider.c +++ b/drivers/infiniband/hw/mthca/mthca_provider.c @@ -1193,6 +1193,93 @@ static void get_dev_fw_str(struct ib_device *device, char *str) (int) dev->fw_ver & 0xffff); } +static const struct ib_device_ops mthca_dev_ops = { + /* Device operations */ + .query_device = mthca_query_device, + .modify_device = mthca_modify_device, + .get_dev_fw_str = get_dev_fw_str, + /* Port operations */ + .query_port = mthca_query_port, + .modify_port = mthca_modify_port, + .get_port_immutable = mthca_port_immutable, + /* PKey operations */ + .query_pkey = mthca_query_pkey, + /* GID operations */ + .query_gid = mthca_query_gid, + /* Ucontext operations */ + .alloc_ucontext = mthca_alloc_ucontext, + .dealloc_ucontext = mthca_dealloc_ucontext, + .mmap = mthca_mmap_uar, + /* PD operations */ + .alloc_pd = mthca_alloc_pd, + .dealloc_pd = mthca_dealloc_pd, + /* AH operations */ + .create_ah = mthca_ah_create, + .query_ah = mthca_ah_query, + .destroy_ah = mthca_ah_destroy, + /* QP operations */ + .create_qp = mthca_create_qp, + .modify_qp = mthca_modify_qp, + .query_qp = mthca_query_qp, + .destroy_qp = mthca_destroy_qp, + /* CQ operations */ + .create_cq = mthca_create_cq, + .resize_cq = mthca_resize_cq, + .destroy_cq = mthca_destroy_cq, + .poll_cq = mthca_poll_cq, + /* MR operations */ + .get_dma_mr = mthca_get_dma_mr, + .reg_user_mr = mthca_reg_user_mr, + .dereg_mr = mthca_dereg_mr, + /* Multicast operations */ + .attach_mcast = mthca_multicast_attach, + .detach_mcast = mthca_multicast_detach, + /* MAD operations */ + .process_mad = mthca_process_mad, +}; + +static const struct ib_device_ops mthca_dev_arbel_srq_ops = { + .create_srq = 
mthca_create_srq, + .modify_srq = mthca_modify_srq, + .query_srq = mthca_query_srq, + .destroy_srq = mthca_destroy_srq, + .post_srq_recv = mthca_arbel_post_srq_recv, +}; + +static const struct ib_device_ops mthca_dev_tavor_srq_ops = { + .create_srq = mthca_create_srq, + .modify_srq = mthca_modify_srq, + .query_srq = mthca_query_srq, + .destroy_srq = mthca_destroy_srq, + .post_srq_recv = mthca_tavor_post_srq_recv, +}; + +static const struct ib_device_ops mthca_dev_arbel_fmr_ops = { + .alloc_fmr = mthca_alloc_fmr, + .unmap_fmr = mthca_unmap_fmr, + .dealloc_fmr = mthca_dealloc_fmr, + .map_phys_fmr = mthca_arbel_map_phys_fmr, +}; + +static const struct ib_device_ops mthca_dev_tavor_fmr_ops = { + .alloc_fmr = mthca_alloc_fmr, + .unmap_fmr = mthca_unmap_fmr, + .dealloc_fmr = mthca_dealloc_fmr, + .map_phys_fmr = mthca_tavor_map_phys_fmr, +}; + +static const struct ib_device_ops mthca_dev_arbel_ops = { + .req_notify_cq = mthca_arbel_arm_cq, + .post_send = mthca_arbel_post_send, + .post_recv = mthca_arbel_post_receive, +}; + +static const struct ib_device_ops mthca_dev_tavor_ops = { + .req_notify_cq = mthca_tavor_arm_cq, + .post_send = mthca_tavor_post_send, + .post_recv = mthca_tavor_post_receive, +}; + int mthca_register_device(struct mthca_dev *dev) { int ret; @@ -1226,26 +1313,8 @@ int mthca_register_device(struct mthca_dev *dev) dev->ib_dev.phys_port_cnt = dev->limits.num_ports; dev->ib_dev.num_comp_vectors = 1; dev->ib_dev.dev.parent = &dev->pdev->dev; - dev->ib_dev.query_device = mthca_query_device; - dev->ib_dev.query_port = mthca_query_port; - dev->ib_dev.modify_device = mthca_modify_device; - dev->ib_dev.modify_port = mthca_modify_port; - dev->ib_dev.query_pkey = mthca_query_pkey; - dev->ib_dev.query_gid = mthca_query_gid; - dev->ib_dev.alloc_ucontext = mthca_alloc_ucontext; - dev->ib_dev.dealloc_ucontext = mthca_dealloc_ucontext; - dev->ib_dev.mmap = mthca_mmap_uar; - dev->ib_dev.alloc_pd = mthca_alloc_pd; - dev->ib_dev.dealloc_pd = mthca_dealloc_pd; - 
dev->ib_dev.create_ah = mthca_ah_create; - dev->ib_dev.query_ah = mthca_ah_query; - dev->ib_dev.destroy_ah = mthca_ah_destroy; if (dev->mthca_flags & MTHCA_FLAG_SRQ) { - dev->ib_dev.create_srq = mthca_create_srq; - dev->ib_dev.modify_srq = mthca_modify_srq; - dev->ib_dev.query_srq = mthca_query_srq; - dev->ib_dev.destroy_srq = mthca_destroy_srq; dev->ib_dev.uverbs_cmd_mask |= (1ull << IB_USER_VERBS_CMD_CREATE_SRQ) | (1ull << IB_USER_VERBS_CMD_MODIFY_SRQ) | @@ -1253,48 +1322,28 @@ int mthca_register_device(struct mthca_dev *dev) (1ull << IB_USER_VERBS_CMD_DESTROY_SRQ); if (mthca_is_memfree(dev)) - dev->ib_dev.post_srq_recv = mthca_arbel_post_srq_recv; + ib_set_device_ops(&dev->ib_dev, + &mthca_dev_arbel_srq_ops); else - dev->ib_dev.post_srq_recv = mthca_tavor_post_srq_recv; + ib_set_device_ops(&dev->ib_dev, + &mthca_dev_tavor_srq_ops); } - dev->ib_dev.create_qp = mthca_create_qp; - dev->ib_dev.modify_qp = mthca_modify_qp; - dev->ib_dev.query_qp = mthca_query_qp; - dev->ib_dev.destroy_qp = mthca_destroy_qp; - dev->ib_dev.create_cq = mthca_create_cq; - dev->ib_dev.resize_cq = mthca_resize_cq; - dev->ib_dev.destroy_cq = mthca_destroy_cq; - dev->ib_dev.poll_cq = mthca_poll_cq; - dev->ib_dev.get_dma_mr = mthca_get_dma_mr; - dev->ib_dev.reg_user_mr = mthca_reg_user_mr; - dev->ib_dev.dereg_mr = mthca_dereg_mr; - dev->ib_dev.get_port_immutable = mthca_port_immutable; - dev->ib_dev.get_dev_fw_str = get_dev_fw_str; - if (dev->mthca_flags & MTHCA_FLAG_FMR) { - dev->ib_dev.alloc_fmr = mthca_alloc_fmr; - dev->ib_dev.unmap_fmr = mthca_unmap_fmr; - dev->ib_dev.dealloc_fmr = mthca_dealloc_fmr; if (mthca_is_memfree(dev)) - dev->ib_dev.map_phys_fmr = mthca_arbel_map_phys_fmr; + ib_set_device_ops(&dev->ib_dev, + &mthca_dev_arbel_fmr_ops); else - dev->ib_dev.map_phys_fmr = mthca_tavor_map_phys_fmr; + ib_set_device_ops(&dev->ib_dev, + &mthca_dev_tavor_fmr_ops); } - dev->ib_dev.attach_mcast = mthca_multicast_attach; - dev->ib_dev.detach_mcast = mthca_multicast_detach; - 
dev->ib_dev.process_mad = mthca_process_mad; + ib_set_device_ops(&dev->ib_dev, &mthca_dev_ops); - if (mthca_is_memfree(dev)) { - dev->ib_dev.req_notify_cq = mthca_arbel_arm_cq; - dev->ib_dev.post_send = mthca_arbel_post_send; - dev->ib_dev.post_recv = mthca_arbel_post_receive; - } else { - dev->ib_dev.req_notify_cq = mthca_tavor_arm_cq; - dev->ib_dev.post_send = mthca_tavor_post_send; - dev->ib_dev.post_recv = mthca_tavor_post_receive; - } + if (mthca_is_memfree(dev)) + ib_set_device_ops(&dev->ib_dev, &mthca_dev_arbel_ops); + else + ib_set_device_ops(&dev->ib_dev, &mthca_dev_tavor_ops); mutex_init(&dev->cap_mask_mutex);

From patchwork Mon Nov 5 11:35:19 2018
From: Kamal Heib
To: Doug Ledford, Jason Gunthorpe
Cc:
linux-rdma@vger.kernel.org, kamalheib1@gmail.com
Subject: [PATCH rdma-next v3 11/20] RDMA/nes: Initialize ib_device_ops struct
Date: Mon, 5 Nov 2018 13:35:19 +0200
Message-Id: <20181105113528.8317-12-kamalheib1@gmail.com>
In-Reply-To: <20181105113528.8317-1-kamalheib1@gmail.com>

Initialize ib_device_ops with the supported operations using
ib_set_device_ops().

Signed-off-by: Kamal Heib
---
 drivers/infiniband/hw/nes/nes_verbs.c | 77 ++++++++++++++++++++---------------
 1 file changed, 45 insertions(+), 32 deletions(-)

diff --git a/drivers/infiniband/hw/nes/nes_verbs.c b/drivers/infiniband/hw/nes/nes_verbs.c index 92d1cadd4cfd..92fe8788ccfc 100644 --- a/drivers/infiniband/hw/nes/nes_verbs.c +++ b/drivers/infiniband/hw/nes/nes_verbs.c @@ -3627,6 +3627,49 @@ static void get_dev_fw_str(struct ib_device *dev, char *str) (nesvnic->nesdev->nesadapter->firmware_version & 0x000000ff)); } +static const struct ib_device_ops nes_dev_ops = { + /* Device operations */ + .query_device = nes_query_device, + .get_dev_fw_str = get_dev_fw_str, + /* Port operations */ + .query_port = nes_query_port, + .get_port_immutable = nes_port_immutable, + /* PKey operations */ + .query_pkey = nes_query_pkey, + /* GID operations */ + .query_gid = nes_query_gid, + /* Ucontext operations */ + .alloc_ucontext = nes_alloc_ucontext, + .dealloc_ucontext = nes_dealloc_ucontext, + .mmap = nes_mmap, + /* PD operations */ + .alloc_pd = nes_alloc_pd, + .dealloc_pd = nes_dealloc_pd, + /* QP operations */ + .create_qp = nes_create_qp, + .modify_qp = nes_modify_qp, + .query_qp = nes_query_qp, + .destroy_qp = nes_destroy_qp, + .post_send = nes_post_send, + .post_recv = nes_post_recv, + .drain_sq = nes_drain_sq, + .drain_rq = nes_drain_rq, + /* CQ operations */ + .create_cq =
nes_create_cq, + .destroy_cq = nes_destroy_cq, + .poll_cq = nes_poll_cq, + .req_notify_cq = nes_req_notify_cq, + /* MR operations */ + .get_dma_mr = nes_get_dma_mr, + .reg_user_mr = nes_reg_user_mr, + .dereg_mr = nes_dereg_mr, + .alloc_mr = nes_alloc_mr, + .map_mr_sg = nes_map_mr_sg, + /* MW operations */ + .alloc_mw = nes_alloc_mw, + .dealloc_mw = nes_dealloc_mw, +}; + /** * nes_init_ofa_device */ @@ -3673,36 +3716,6 @@ struct nes_ib_device *nes_init_ofa_device(struct net_device *netdev) nesibdev->ibdev.phys_port_cnt = 1; nesibdev->ibdev.num_comp_vectors = 1; nesibdev->ibdev.dev.parent = &nesdev->pcidev->dev; - nesibdev->ibdev.query_device = nes_query_device; - nesibdev->ibdev.query_port = nes_query_port; - nesibdev->ibdev.query_pkey = nes_query_pkey; - nesibdev->ibdev.query_gid = nes_query_gid; - nesibdev->ibdev.alloc_ucontext = nes_alloc_ucontext; - nesibdev->ibdev.dealloc_ucontext = nes_dealloc_ucontext; - nesibdev->ibdev.mmap = nes_mmap; - nesibdev->ibdev.alloc_pd = nes_alloc_pd; - nesibdev->ibdev.dealloc_pd = nes_dealloc_pd; - nesibdev->ibdev.create_qp = nes_create_qp; - nesibdev->ibdev.modify_qp = nes_modify_qp; - nesibdev->ibdev.query_qp = nes_query_qp; - nesibdev->ibdev.destroy_qp = nes_destroy_qp; - nesibdev->ibdev.create_cq = nes_create_cq; - nesibdev->ibdev.destroy_cq = nes_destroy_cq; - nesibdev->ibdev.poll_cq = nes_poll_cq; - nesibdev->ibdev.get_dma_mr = nes_get_dma_mr; - nesibdev->ibdev.reg_user_mr = nes_reg_user_mr; - nesibdev->ibdev.dereg_mr = nes_dereg_mr; - nesibdev->ibdev.alloc_mw = nes_alloc_mw; - nesibdev->ibdev.dealloc_mw = nes_dealloc_mw; - - nesibdev->ibdev.alloc_mr = nes_alloc_mr; - nesibdev->ibdev.map_mr_sg = nes_map_mr_sg; - - nesibdev->ibdev.req_notify_cq = nes_req_notify_cq; - nesibdev->ibdev.post_send = nes_post_send; - nesibdev->ibdev.post_recv = nes_post_recv; - nesibdev->ibdev.drain_sq = nes_drain_sq; - nesibdev->ibdev.drain_rq = nes_drain_rq; nesibdev->ibdev.iwcm = kzalloc(sizeof(*nesibdev->ibdev.iwcm), GFP_KERNEL); if 
(nesibdev->ibdev.iwcm == NULL) { @@ -3717,8 +3730,8 @@ struct nes_ib_device *nes_init_ofa_device(struct net_device *netdev) nesibdev->ibdev.iwcm->reject = nes_reject; nesibdev->ibdev.iwcm->create_listen = nes_create_listen; nesibdev->ibdev.iwcm->destroy_listen = nes_destroy_listen; - nesibdev->ibdev.get_port_immutable = nes_port_immutable; - nesibdev->ibdev.get_dev_fw_str = get_dev_fw_str; + + ib_set_device_ops(&nesibdev->ibdev, &nes_dev_ops); memcpy(nesibdev->ibdev.iwcm->ifname, netdev->name, sizeof(nesibdev->ibdev.iwcm->ifname));

From patchwork Mon Nov 5 11:35:20 2018
From: Kamal Heib
To: Doug Ledford, Jason Gunthorpe
Cc: linux-rdma@vger.kernel.org, kamalheib1@gmail.com
Subject: [PATCH rdma-next v3 12/20]
RDMA/ocrdma: Initialize ib_device_ops struct
Date: Mon, 5 Nov 2018 13:35:20 +0200
Message-Id: <20181105113528.8317-13-kamalheib1@gmail.com>
In-Reply-To: <20181105113528.8317-1-kamalheib1@gmail.com>

Initialize ib_device_ops with the supported operations using
ib_set_device_ops().

Signed-off-by: Kamal Heib
---
 drivers/infiniband/hw/ocrdma/ocrdma_main.c | 102 ++++++++++++++++-------------
 1 file changed, 56 insertions(+), 46 deletions(-)

diff --git a/drivers/infiniband/hw/ocrdma/ocrdma_main.c b/drivers/infiniband/hw/ocrdma/ocrdma_main.c index 873cc7f6fe61..a9ee7d1b2ca5 100644 --- a/drivers/infiniband/hw/ocrdma/ocrdma_main.c +++ b/drivers/infiniband/hw/ocrdma/ocrdma_main.c @@ -143,6 +143,60 @@ static const struct attribute_group ocrdma_attr_group = { .attrs = ocrdma_attributes, }; +static const struct ib_device_ops ocrdma_dev_ops = { + /* Device operations */ + .query_device = ocrdma_query_device, + .get_dev_fw_str = get_dev_fw_str, + /* Port operations */ + .query_port = ocrdma_query_port, + .modify_port = ocrdma_modify_port, + .get_netdev = ocrdma_get_netdev, + .get_link_layer = ocrdma_link_layer, + .get_port_immutable = ocrdma_port_immutable, + /* PD operations */ + .alloc_pd = ocrdma_alloc_pd, + .dealloc_pd = ocrdma_dealloc_pd, + /* CQ operations */ + .create_cq = ocrdma_create_cq, + .destroy_cq = ocrdma_destroy_cq, + .resize_cq = ocrdma_resize_cq, + .poll_cq = ocrdma_poll_cq, + .req_notify_cq = ocrdma_arm_cq, + /* QP operations */ + .create_qp = ocrdma_create_qp, + .modify_qp = ocrdma_modify_qp, + .query_qp = ocrdma_query_qp, + .destroy_qp = ocrdma_destroy_qp, + .post_send = ocrdma_post_send, + .post_recv = ocrdma_post_recv, + /* PKey operations */ + .query_pkey = ocrdma_query_pkey, + /* AH operations */ + .create_ah =
ocrdma_create_ah, + .destroy_ah = ocrdma_destroy_ah, + .query_ah = ocrdma_query_ah, + /* MR operations */ + .get_dma_mr = ocrdma_get_dma_mr, + .dereg_mr = ocrdma_dereg_mr, + .reg_user_mr = ocrdma_reg_user_mr, + .alloc_mr = ocrdma_alloc_mr, + .map_mr_sg = ocrdma_map_mr_sg, + /* Ucontext operations */ + .alloc_ucontext = ocrdma_alloc_ucontext, + .dealloc_ucontext = ocrdma_dealloc_ucontext, + .mmap = ocrdma_mmap, + /* MAD operations */ + .process_mad = ocrdma_process_mad, +}; + +static const struct ib_device_ops ocrdma_dev_srq_ops = { + .create_srq = ocrdma_create_srq, + .modify_srq = ocrdma_modify_srq, + .query_srq = ocrdma_query_srq, + .destroy_srq = ocrdma_destroy_srq, + .post_srq_recv = ocrdma_post_srq_recv, +}; + static int ocrdma_register_device(struct ocrdma_dev *dev) { ocrdma_get_guid(dev, (u8 *)&dev->ibdev.node_guid); @@ -182,50 +236,10 @@ static int ocrdma_register_device(struct ocrdma_dev *dev) dev->ibdev.phys_port_cnt = 1; dev->ibdev.num_comp_vectors = dev->eq_cnt; - /* mandatory verbs. 
*/ - dev->ibdev.query_device = ocrdma_query_device; - dev->ibdev.query_port = ocrdma_query_port; - dev->ibdev.modify_port = ocrdma_modify_port; - dev->ibdev.get_netdev = ocrdma_get_netdev; - dev->ibdev.get_link_layer = ocrdma_link_layer; - dev->ibdev.alloc_pd = ocrdma_alloc_pd; - dev->ibdev.dealloc_pd = ocrdma_dealloc_pd; - - dev->ibdev.create_cq = ocrdma_create_cq; - dev->ibdev.destroy_cq = ocrdma_destroy_cq; - dev->ibdev.resize_cq = ocrdma_resize_cq; - - dev->ibdev.create_qp = ocrdma_create_qp; - dev->ibdev.modify_qp = ocrdma_modify_qp; - dev->ibdev.query_qp = ocrdma_query_qp; - dev->ibdev.destroy_qp = ocrdma_destroy_qp; - - dev->ibdev.query_pkey = ocrdma_query_pkey; - dev->ibdev.create_ah = ocrdma_create_ah; - dev->ibdev.destroy_ah = ocrdma_destroy_ah; - dev->ibdev.query_ah = ocrdma_query_ah; - - dev->ibdev.poll_cq = ocrdma_poll_cq; - dev->ibdev.post_send = ocrdma_post_send; - dev->ibdev.post_recv = ocrdma_post_recv; - dev->ibdev.req_notify_cq = ocrdma_arm_cq; - - dev->ibdev.get_dma_mr = ocrdma_get_dma_mr; - dev->ibdev.dereg_mr = ocrdma_dereg_mr; - dev->ibdev.reg_user_mr = ocrdma_reg_user_mr; - - dev->ibdev.alloc_mr = ocrdma_alloc_mr; - dev->ibdev.map_mr_sg = ocrdma_map_mr_sg; - /* mandatory to support user space verbs consumer. 
*/ - dev->ibdev.alloc_ucontext = ocrdma_alloc_ucontext; - dev->ibdev.dealloc_ucontext = ocrdma_dealloc_ucontext; - dev->ibdev.mmap = ocrdma_mmap; dev->ibdev.dev.parent = &dev->nic_info.pdev->dev; - dev->ibdev.process_mad = ocrdma_process_mad; - dev->ibdev.get_port_immutable = ocrdma_port_immutable; - dev->ibdev.get_dev_fw_str = get_dev_fw_str; + ib_set_device_ops(&dev->ibdev, &ocrdma_dev_ops); if (ocrdma_get_asic_type(dev) == OCRDMA_ASIC_GEN_SKH_R) { dev->ibdev.uverbs_cmd_mask |= @@ -235,11 +249,7 @@ static int ocrdma_register_device(struct ocrdma_dev *dev) OCRDMA_UVERBS(DESTROY_SRQ) | OCRDMA_UVERBS(POST_SRQ_RECV); - dev->ibdev.create_srq = ocrdma_create_srq; - dev->ibdev.modify_srq = ocrdma_modify_srq; - dev->ibdev.query_srq = ocrdma_query_srq; - dev->ibdev.destroy_srq = ocrdma_destroy_srq; - dev->ibdev.post_srq_recv = ocrdma_post_srq_recv; + ib_set_device_ops(&dev->ibdev, &ocrdma_dev_srq_ops); } rdma_set_device_sysfs_group(&dev->ibdev, &ocrdma_attr_group); dev->ibdev.driver_id = RDMA_DRIVER_OCRDMA;

From patchwork Mon Nov 5 11:35:21 2018
From: Kamal Heib
To: Doug Ledford, Jason Gunthorpe
Cc: linux-rdma@vger.kernel.org, kamalheib1@gmail.com
Subject: [PATCH rdma-next v3 13/20] RDMA/qedr: Initialize ib_device_ops struct
Date: Mon, 5 Nov 2018 13:35:21 +0200
Message-Id: <20181105113528.8317-14-kamalheib1@gmail.com>
In-Reply-To: <20181105113528.8317-1-kamalheib1@gmail.com>

Initialize ib_device_ops with the supported operations using
ib_set_device_ops().

Signed-off-by: Kamal Heib
---
 drivers/infiniband/hw/qedr/main.c | 114 +++++++++++++++++++++----------------
 1 file changed, 63 insertions(+), 51 deletions(-)

diff --git a/drivers/infiniband/hw/qedr/main.c b/drivers/infiniband/hw/qedr/main.c index 8d6ff9df49fe..137b0bedb2db 100644 --- a/drivers/infiniband/hw/qedr/main.c +++ b/drivers/infiniband/hw/qedr/main.c @@ -160,12 +160,16 @@ static const struct attribute_group qedr_attr_group = { .attrs = qedr_attributes, }; +static const struct ib_device_ops qedr_iw_dev_ops = { + .query_gid = qedr_iw_query_gid, + .get_port_immutable = qedr_iw_port_immutable, +}; + static int qedr_iw_register_device(struct qedr_dev *dev) { dev->ibdev.node_type = RDMA_NODE_RNIC; - dev->ibdev.query_gid = qedr_iw_query_gid; - dev->ibdev.get_port_immutable = qedr_iw_port_immutable; + ib_set_device_ops(&dev->ibdev, &qedr_iw_dev_ops); dev->ibdev.iwcm = kzalloc(sizeof(*dev->ibdev.iwcm), GFP_KERNEL); if (!dev->ibdev.iwcm) @@ -186,13 +190,67 @@ static int
qedr_iw_register_device(struct qedr_dev *dev) return 0; } +static const struct ib_device_ops qedr_roce_dev_ops = { + .get_port_immutable = qedr_roce_port_immutable, +}; + static void qedr_roce_register_device(struct qedr_dev *dev) { dev->ibdev.node_type = RDMA_NODE_IB_CA; - dev->ibdev.get_port_immutable = qedr_roce_port_immutable; + ib_set_device_ops(&dev->ibdev, &qedr_roce_dev_ops); } +static const struct ib_device_ops qedr_dev_ops = { + /* Device operations */ + .query_device = qedr_query_device, + .get_dev_fw_str = qedr_get_dev_fw_str, + /* Port operations */ + .query_port = qedr_query_port, + .modify_port = qedr_modify_port, + .get_netdev = qedr_get_netdev, + .get_link_layer = qedr_link_layer, + /* Ucontext operations */ + .alloc_ucontext = qedr_alloc_ucontext, + .dealloc_ucontext = qedr_dealloc_ucontext, + .mmap = qedr_mmap, + /* PD operations */ + .alloc_pd = qedr_alloc_pd, + .dealloc_pd = qedr_dealloc_pd, + /* CQ operations */ + .create_cq = qedr_create_cq, + .destroy_cq = qedr_destroy_cq, + .resize_cq = qedr_resize_cq, + .req_notify_cq = qedr_arm_cq, + .poll_cq = qedr_poll_cq, + /* QP operations */ + .create_qp = qedr_create_qp, + .modify_qp = qedr_modify_qp, + .query_qp = qedr_query_qp, + .destroy_qp = qedr_destroy_qp, + .post_send = qedr_post_send, + .post_recv = qedr_post_recv, + /* SRQ operations */ + .create_srq = qedr_create_srq, + .destroy_srq = qedr_destroy_srq, + .modify_srq = qedr_modify_srq, + .query_srq = qedr_query_srq, + .post_srq_recv = qedr_post_srq_recv, + /* PKey operations */ + .query_pkey = qedr_query_pkey, + /* AH operations */ + .create_ah = qedr_create_ah, + .destroy_ah = qedr_destroy_ah, + /* MR operations */ + .get_dma_mr = qedr_get_dma_mr, + .dereg_mr = qedr_dereg_mr, + .reg_user_mr = qedr_reg_user_mr, + .alloc_mr = qedr_alloc_mr, + .map_mr_sg = qedr_map_mr_sg, + /* MAD operations */ + .process_mad = qedr_process_mad, +}; + static int qedr_register_device(struct qedr_dev *dev) { int rc; @@ -237,57 +295,11 @@ static int 
qedr_register_device(struct qedr_dev *dev) dev->ibdev.phys_port_cnt = 1; dev->ibdev.num_comp_vectors = dev->num_cnq; - - dev->ibdev.query_device = qedr_query_device; - dev->ibdev.query_port = qedr_query_port; - dev->ibdev.modify_port = qedr_modify_port; - - dev->ibdev.alloc_ucontext = qedr_alloc_ucontext; - dev->ibdev.dealloc_ucontext = qedr_dealloc_ucontext; - dev->ibdev.mmap = qedr_mmap; - - dev->ibdev.alloc_pd = qedr_alloc_pd; - dev->ibdev.dealloc_pd = qedr_dealloc_pd; - - dev->ibdev.create_cq = qedr_create_cq; - dev->ibdev.destroy_cq = qedr_destroy_cq; - dev->ibdev.resize_cq = qedr_resize_cq; - dev->ibdev.req_notify_cq = qedr_arm_cq; - - dev->ibdev.create_qp = qedr_create_qp; - dev->ibdev.modify_qp = qedr_modify_qp; - dev->ibdev.query_qp = qedr_query_qp; - dev->ibdev.destroy_qp = qedr_destroy_qp; - - dev->ibdev.create_srq = qedr_create_srq; - dev->ibdev.destroy_srq = qedr_destroy_srq; - dev->ibdev.modify_srq = qedr_modify_srq; - dev->ibdev.query_srq = qedr_query_srq; - dev->ibdev.post_srq_recv = qedr_post_srq_recv; - dev->ibdev.query_pkey = qedr_query_pkey; - - dev->ibdev.create_ah = qedr_create_ah; - dev->ibdev.destroy_ah = qedr_destroy_ah; - - dev->ibdev.get_dma_mr = qedr_get_dma_mr; - dev->ibdev.dereg_mr = qedr_dereg_mr; - dev->ibdev.reg_user_mr = qedr_reg_user_mr; - dev->ibdev.alloc_mr = qedr_alloc_mr; - dev->ibdev.map_mr_sg = qedr_map_mr_sg; - - dev->ibdev.poll_cq = qedr_poll_cq; - dev->ibdev.post_send = qedr_post_send; - dev->ibdev.post_recv = qedr_post_recv; - - dev->ibdev.process_mad = qedr_process_mad; - - dev->ibdev.get_netdev = qedr_get_netdev; - dev->ibdev.dev.parent = &dev->pdev->dev; - dev->ibdev.get_link_layer = qedr_link_layer; - dev->ibdev.get_dev_fw_str = qedr_get_dev_fw_str; rdma_set_device_sysfs_group(&dev->ibdev, &qedr_attr_group); + ib_set_device_ops(&dev->ibdev, &qedr_dev_ops); + dev->ibdev.driver_id = RDMA_DRIVER_QEDR; return ib_register_device(&dev->ibdev, "qedr%d", NULL); } From patchwork Mon Nov 5 11:35:22 2018 Content-Type: 
From patchwork Mon Nov 5 11:35:22 2018
X-Patchwork-Submitter: Kamal Heib
X-Patchwork-Id: 10667893
From: Kamal Heib
To: Doug Ledford, Jason Gunthorpe
Cc: linux-rdma@vger.kernel.org, kamalheib1@gmail.com
Subject: [PATCH rdma-next v3 14/20] RDMA/qib: Initialize ib_device_ops struct
Date: Mon, 5 Nov 2018 13:35:22 +0200
Message-Id: <20181105113528.8317-15-kamalheib1@gmail.com>
In-Reply-To: <20181105113528.8317-1-kamalheib1@gmail.com>
References: <20181105113528.8317-1-kamalheib1@gmail.com>

Initialize ib_device_ops with the supported operations using
ib_set_device_ops().

Signed-off-by: Kamal Heib
---
 drivers/infiniband/hw/qib/qib_verbs.c | 10 ++++++++--
 1 file changed, 8 insertions(+), 2 deletions(-)

diff --git a/drivers/infiniband/hw/qib/qib_verbs.c b/drivers/infiniband/hw/qib/qib_verbs.c
index 4b0f5761a646..b756ab613a9a 100644
--- a/drivers/infiniband/hw/qib/qib_verbs.c
+++ b/drivers/infiniband/hw/qib/qib_verbs.c
@@ -1496,6 +1496,13 @@ static void qib_fill_device_attr(struct qib_devdata *dd)
 	dd->verbs_dev.rdi.wc_opcode = ib_qib_wc_opcode;
 }
 
+static const struct ib_device_ops qib_dev_ops = {
+	/* Device operations */
+	.modify_device = qib_modify_device,
+	/* MAD operations */
+	.process_mad = qib_process_mad,
+};
+
 /**
  * qib_register_ib_device - register our device with the infiniband core
  * @dd: the device data structure
@@ -1558,8 +1565,6 @@ int qib_register_ib_device(struct qib_devdata *dd)
 	ibdev->node_guid = ppd->guid;
 	ibdev->phys_port_cnt = dd->num_pports;
 	ibdev->dev.parent = &dd->pcidev->dev;
-	ibdev->modify_device = qib_modify_device;
-	ibdev->process_mad = qib_process_mad;
 	snprintf(ibdev->node_desc, sizeof(ibdev->node_desc),
 		 "Intel Infiniband HCA %s", init_utsname()->nodename);
@@ -1627,6 +1632,7 @@ int qib_register_ib_device(struct qib_devdata *dd)
 	}
 
 	rdma_set_device_sysfs_group(&dd->verbs_dev.rdi.ibdev, &qib_attr_group);
+	ib_set_device_ops(ibdev, &qib_dev_ops);
 
 	ret = rvt_register_device(&dd->verbs_dev.rdi, RDMA_DRIVER_QIB);
 	if (ret)
 		goto err_tx;
From patchwork Mon Nov 5 11:35:23 2018
X-Patchwork-Submitter: Kamal Heib
X-Patchwork-Id: 10667895
From: Kamal Heib
To: Doug Ledford, Jason Gunthorpe
Cc: linux-rdma@vger.kernel.org, kamalheib1@gmail.com
Subject: [PATCH rdma-next v3 15/20] RDMA/usnic: Initialize ib_device_ops struct
Date: Mon, 5 Nov 2018 13:35:23 +0200
Message-Id: <20181105113528.8317-16-kamalheib1@gmail.com>
In-Reply-To: <20181105113528.8317-1-kamalheib1@gmail.com>
References: <20181105113528.8317-1-kamalheib1@gmail.com>

Initialize ib_device_ops with the supported operations using
ib_set_device_ops().

Signed-off-by: Kamal Heib
---
 drivers/infiniband/hw/usnic/usnic_ib_main.c | 71 +++++++++++++++++------------
 1 file changed, 42 insertions(+), 29 deletions(-)

diff --git a/drivers/infiniband/hw/usnic/usnic_ib_main.c b/drivers/infiniband/hw/usnic/usnic_ib_main.c
index 73bd00f8d2c8..e369072c0d89 100644
--- a/drivers/infiniband/hw/usnic/usnic_ib_main.c
+++ b/drivers/infiniband/hw/usnic/usnic_ib_main.c
@@ -330,6 +330,47 @@ static void usnic_get_dev_fw_str(struct ib_device *device, char *str)
 	snprintf(str, IB_FW_VERSION_NAME_MAX, "%s", info.fw_version);
 }
 
+static const struct ib_device_ops usnic_dev_ops = {
+	/* Device operations */
+	.query_device = usnic_ib_query_device,
+	.get_dev_fw_str = usnic_get_dev_fw_str,
+	/* Port operations */
+	.query_port = usnic_ib_query_port,
+	.get_netdev = usnic_get_netdev,
+	.get_link_layer = usnic_ib_port_link_layer,
+	.get_port_immutable = usnic_port_immutable,
+	/* PKey operations */
+	.query_pkey = usnic_ib_query_pkey,
+	/* GID operations */
+	.query_gid = usnic_ib_query_gid,
+	/* PD operations */
+	.alloc_pd = usnic_ib_alloc_pd,
+	.dealloc_pd = usnic_ib_dealloc_pd,
+	/* QP operations */
+	.create_qp = usnic_ib_create_qp,
+	.modify_qp = usnic_ib_modify_qp,
+	.query_qp = usnic_ib_query_qp,
+	.destroy_qp = usnic_ib_destroy_qp,
+	.post_send = usnic_ib_post_send,
+	.post_recv = usnic_ib_post_recv,
+	/* CQ operations */
+	.create_cq = usnic_ib_create_cq,
+	.destroy_cq = usnic_ib_destroy_cq,
+	.poll_cq = usnic_ib_poll_cq,
+	.req_notify_cq = usnic_ib_req_notify_cq,
+	/* MR operations */
+	.reg_user_mr = usnic_ib_reg_mr,
+	.dereg_mr = usnic_ib_dereg_mr,
+	.get_dma_mr = usnic_ib_get_dma_mr,
+	/* Ucontext operations */
+	.alloc_ucontext = usnic_ib_alloc_ucontext,
+	.dealloc_ucontext = usnic_ib_dealloc_ucontext,
+	.mmap = usnic_ib_mmap,
+	/* AH operations */
+	.create_ah = usnic_ib_create_ah,
+	.destroy_ah = usnic_ib_destroy_ah,
+};
+
 /* Start of PF discovery section */
 static void *usnic_ib_device_add(struct pci_dev *dev)
 {
@@ -386,35 +427,7 @@ static void *usnic_ib_device_add(struct pci_dev *dev)
 		(1ull << IB_USER_VERBS_CMD_DETACH_MCAST) |
 		(1ull << IB_USER_VERBS_CMD_OPEN_QP);
 
-	us_ibdev->ib_dev.query_device = usnic_ib_query_device;
-	us_ibdev->ib_dev.query_port = usnic_ib_query_port;
-	us_ibdev->ib_dev.query_pkey = usnic_ib_query_pkey;
-	us_ibdev->ib_dev.query_gid = usnic_ib_query_gid;
-	us_ibdev->ib_dev.get_netdev = usnic_get_netdev;
-	us_ibdev->ib_dev.get_link_layer = usnic_ib_port_link_layer;
-	us_ibdev->ib_dev.alloc_pd = usnic_ib_alloc_pd;
-	us_ibdev->ib_dev.dealloc_pd = usnic_ib_dealloc_pd;
-	us_ibdev->ib_dev.create_qp = usnic_ib_create_qp;
-	us_ibdev->ib_dev.modify_qp = usnic_ib_modify_qp;
-	us_ibdev->ib_dev.query_qp = usnic_ib_query_qp;
-	us_ibdev->ib_dev.destroy_qp = usnic_ib_destroy_qp;
-	us_ibdev->ib_dev.create_cq = usnic_ib_create_cq;
-	us_ibdev->ib_dev.destroy_cq = usnic_ib_destroy_cq;
-	us_ibdev->ib_dev.reg_user_mr = usnic_ib_reg_mr;
-	us_ibdev->ib_dev.dereg_mr = usnic_ib_dereg_mr;
-	us_ibdev->ib_dev.alloc_ucontext = usnic_ib_alloc_ucontext;
-	us_ibdev->ib_dev.dealloc_ucontext = usnic_ib_dealloc_ucontext;
-	us_ibdev->ib_dev.mmap = usnic_ib_mmap;
-	us_ibdev->ib_dev.create_ah = usnic_ib_create_ah;
-	us_ibdev->ib_dev.destroy_ah = usnic_ib_destroy_ah;
-	us_ibdev->ib_dev.post_send = usnic_ib_post_send;
-	us_ibdev->ib_dev.post_recv = usnic_ib_post_recv;
-	us_ibdev->ib_dev.poll_cq = usnic_ib_poll_cq;
-	us_ibdev->ib_dev.req_notify_cq = usnic_ib_req_notify_cq;
-	us_ibdev->ib_dev.get_dma_mr = usnic_ib_get_dma_mr;
-	us_ibdev->ib_dev.get_port_immutable = usnic_port_immutable;
-	us_ibdev->ib_dev.get_dev_fw_str = usnic_get_dev_fw_str;
-
+	ib_set_device_ops(&us_ibdev->ib_dev, &usnic_dev_ops);
 	us_ibdev->ib_dev.driver_id = RDMA_DRIVER_USNIC;
 	rdma_set_device_sysfs_group(&us_ibdev->ib_dev, &usnic_attr_group);
From patchwork Mon Nov 5 11:35:24 2018
X-Patchwork-Submitter: Kamal Heib
X-Patchwork-Id: 10667897
From: Kamal Heib
To: Doug Ledford, Jason Gunthorpe
Cc: linux-rdma@vger.kernel.org, kamalheib1@gmail.com
Subject: [PATCH rdma-next v3 16/20] RDMA/vmw_pvrdma: Initialize ib_device_ops struct
Date: Mon, 5 Nov 2018 13:35:24 +0200
Message-Id: <20181105113528.8317-17-kamalheib1@gmail.com>
In-Reply-To: <20181105113528.8317-1-kamalheib1@gmail.com>
References: <20181105113528.8317-1-kamalheib1@gmail.com>

Initialize ib_device_ops with the supported operations using
ib_set_device_ops().

Signed-off-by: Kamal Heib
---
 drivers/infiniband/hw/vmw_pvrdma/pvrdma_main.c | 92 +++++++++++++++-----------
 1 file changed, 55 insertions(+), 37 deletions(-)

diff --git a/drivers/infiniband/hw/vmw_pvrdma/pvrdma_main.c b/drivers/infiniband/hw/vmw_pvrdma/pvrdma_main.c
index 398443f43dc3..ee5352d18be2 100644
--- a/drivers/infiniband/hw/vmw_pvrdma/pvrdma_main.c
+++ b/drivers/infiniband/hw/vmw_pvrdma/pvrdma_main.c
@@ -161,6 +161,59 @@ static struct net_device *pvrdma_get_netdev(struct ib_device *ibdev,
 	return netdev;
 }
 
+static const struct ib_device_ops pvrdma_dev_ops = {
+	/* Device operations */
+	.query_device = pvrdma_query_device,
+	.get_dev_fw_str = pvrdma_get_fw_ver_str,
+	/* Port operations */
+	.query_port = pvrdma_query_port,
+	.modify_port = pvrdma_modify_port,
+	.get_netdev = pvrdma_get_netdev,
+	.get_port_immutable = pvrdma_port_immutable,
+	.get_link_layer = pvrdma_port_link_layer,
+	/* GID operations */
+	.query_gid = pvrdma_query_gid,
+	.add_gid = pvrdma_add_gid,
+	.del_gid = pvrdma_del_gid,
+	/* PKey operations */
+	.query_pkey = pvrdma_query_pkey,
+	/* Ucontext operations */
+	.alloc_ucontext = pvrdma_alloc_ucontext,
+	.dealloc_ucontext = pvrdma_dealloc_ucontext,
+	.mmap = pvrdma_mmap,
+	/* PD operations */
+	.alloc_pd = pvrdma_alloc_pd,
+	.dealloc_pd = pvrdma_dealloc_pd,
+	/* AH operations */
+	.create_ah = pvrdma_create_ah,
+	.destroy_ah = pvrdma_destroy_ah,
+	/* QP operations */
+	.create_qp = pvrdma_create_qp,
+	.modify_qp = pvrdma_modify_qp,
+	.query_qp = pvrdma_query_qp,
+	.destroy_qp = pvrdma_destroy_qp,
+	.post_send = pvrdma_post_send,
+	.post_recv = pvrdma_post_recv,
+	/* CQ operations */
+	.create_cq = pvrdma_create_cq,
+	.destroy_cq = pvrdma_destroy_cq,
+	.poll_cq = pvrdma_poll_cq,
+	.req_notify_cq = pvrdma_req_notify_cq,
+	/* MR operations */
+	.get_dma_mr = pvrdma_get_dma_mr,
+	.reg_user_mr = pvrdma_reg_user_mr,
+	.dereg_mr = pvrdma_dereg_mr,
+	.alloc_mr = pvrdma_alloc_mr,
+	.map_mr_sg = pvrdma_map_mr_sg,
+};
+
+static const struct ib_device_ops pvrdma_dev_srq_ops = {
+	.create_srq = pvrdma_create_srq,
+	.modify_srq = pvrdma_modify_srq,
+	.query_srq = pvrdma_query_srq,
+	.destroy_srq = pvrdma_destroy_srq,
+};
+
 static int pvrdma_register_device(struct pvrdma_dev *dev)
 {
 	int ret = -1;
@@ -197,39 +250,7 @@ static int pvrdma_register_device(struct pvrdma_dev *dev)
 	dev->ib_dev.node_type = RDMA_NODE_IB_CA;
 	dev->ib_dev.phys_port_cnt = dev->dsr->caps.phys_port_cnt;
-	dev->ib_dev.query_device = pvrdma_query_device;
-	dev->ib_dev.query_port = pvrdma_query_port;
-	dev->ib_dev.query_gid = pvrdma_query_gid;
-	dev->ib_dev.query_pkey = pvrdma_query_pkey;
-	dev->ib_dev.modify_port = pvrdma_modify_port;
-	dev->ib_dev.alloc_ucontext = pvrdma_alloc_ucontext;
-	dev->ib_dev.dealloc_ucontext = pvrdma_dealloc_ucontext;
-	dev->ib_dev.mmap = pvrdma_mmap;
-	dev->ib_dev.alloc_pd = pvrdma_alloc_pd;
-	dev->ib_dev.dealloc_pd = pvrdma_dealloc_pd;
-	dev->ib_dev.create_ah = pvrdma_create_ah;
-	dev->ib_dev.destroy_ah = pvrdma_destroy_ah;
-	dev->ib_dev.create_qp = pvrdma_create_qp;
-	dev->ib_dev.modify_qp = pvrdma_modify_qp;
-	dev->ib_dev.query_qp = pvrdma_query_qp;
-	dev->ib_dev.destroy_qp = pvrdma_destroy_qp;
-	dev->ib_dev.post_send = pvrdma_post_send;
-	dev->ib_dev.post_recv = pvrdma_post_recv;
-	dev->ib_dev.create_cq = pvrdma_create_cq;
-	dev->ib_dev.destroy_cq = pvrdma_destroy_cq;
-	dev->ib_dev.poll_cq = pvrdma_poll_cq;
-	dev->ib_dev.req_notify_cq = pvrdma_req_notify_cq;
-	dev->ib_dev.get_dma_mr = pvrdma_get_dma_mr;
-	dev->ib_dev.reg_user_mr = pvrdma_reg_user_mr;
-	dev->ib_dev.dereg_mr = pvrdma_dereg_mr;
-	dev->ib_dev.alloc_mr = pvrdma_alloc_mr;
-	dev->ib_dev.map_mr_sg = pvrdma_map_mr_sg;
-	dev->ib_dev.add_gid = pvrdma_add_gid;
-	dev->ib_dev.del_gid = pvrdma_del_gid;
-	dev->ib_dev.get_netdev = pvrdma_get_netdev;
-	dev->ib_dev.get_port_immutable = pvrdma_port_immutable;
-	dev->ib_dev.get_link_layer = pvrdma_port_link_layer;
-	dev->ib_dev.get_dev_fw_str = pvrdma_get_fw_ver_str;
+	ib_set_device_ops(&dev->ib_dev, &pvrdma_dev_ops);
 
 	mutex_init(&dev->port_mutex);
 	spin_lock_init(&dev->desc_lock);
@@ -255,10 +276,7 @@ static int pvrdma_register_device(struct pvrdma_dev *dev)
 			(1ull << IB_USER_VERBS_CMD_DESTROY_SRQ) |
 			(1ull << IB_USER_VERBS_CMD_POST_SRQ_RECV);
-		dev->ib_dev.create_srq = pvrdma_create_srq;
-		dev->ib_dev.modify_srq = pvrdma_modify_srq;
-		dev->ib_dev.query_srq = pvrdma_query_srq;
-		dev->ib_dev.destroy_srq = pvrdma_destroy_srq;
+		ib_set_device_ops(&dev->ib_dev, &pvrdma_dev_srq_ops);
 
 		dev->srq_tbl = kcalloc(dev->dsr->caps.max_srq,
 				       sizeof(struct pvrdma_srq *),
From patchwork Mon Nov 5 11:35:25 2018
X-Patchwork-Submitter: Kamal Heib
X-Patchwork-Id: 10667899
From: Kamal Heib
To: Doug Ledford, Jason Gunthorpe
Cc: linux-rdma@vger.kernel.org, kamalheib1@gmail.com
Subject: [PATCH rdma-next v3 17/20] RDMA/rxe: Initialize ib_device_ops struct
Date: Mon, 5 Nov 2018 13:35:25 +0200
Message-Id: <20181105113528.8317-18-kamalheib1@gmail.com>
In-Reply-To: <20181105113528.8317-1-kamalheib1@gmail.com>
References: <20181105113528.8317-1-kamalheib1@gmail.com>

Initialize ib_device_ops with the supported operations using
ib_set_device_ops().

Signed-off-by: Kamal Heib
---
 drivers/infiniband/sw/rxe/rxe_verbs.c | 102 ++++++++++++++++++++--------------
 1 file changed, 59 insertions(+), 43 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.c b/drivers/infiniband/sw/rxe/rxe_verbs.c
index 9c19f2027511..9ac3467b5a56 100644
--- a/drivers/infiniband/sw/rxe/rxe_verbs.c
+++ b/drivers/infiniband/sw/rxe/rxe_verbs.c
@@ -1157,6 +1157,64 @@ static const struct attribute_group rxe_attr_group = {
 	.attrs = rxe_dev_attributes,
 };
 
+static const struct ib_device_ops rxe_dev_ops = {
+	/* Device operations */
+	.query_device = rxe_query_device,
+	.modify_device = rxe_modify_device,
+	/* Port operations */
+	.query_port = rxe_query_port,
+	.modify_port = rxe_modify_port,
+	.get_link_layer = rxe_get_link_layer,
+	.get_netdev = rxe_get_netdev,
+	.get_port_immutable = rxe_port_immutable,
+	/* PKey operations */
+	.query_pkey = rxe_query_pkey,
+	/* Ucontext operations */
+	.alloc_ucontext = rxe_alloc_ucontext,
+	.dealloc_ucontext = rxe_dealloc_ucontext,
+	.mmap = rxe_mmap,
+	/* PD operations */
+	.alloc_pd = rxe_alloc_pd,
+	.dealloc_pd = rxe_dealloc_pd,
+	/* AH operations */
+	.create_ah = rxe_create_ah,
+	.modify_ah = rxe_modify_ah,
+	.query_ah = rxe_query_ah,
+	.destroy_ah = rxe_destroy_ah,
+	/* SRQ operations */
+	.create_srq = rxe_create_srq,
+	.modify_srq = rxe_modify_srq,
+	.query_srq = rxe_query_srq,
+	.destroy_srq = rxe_destroy_srq,
+	.post_srq_recv = rxe_post_srq_recv,
+	/* QP operations */
+	.create_qp = rxe_create_qp,
+	.modify_qp = rxe_modify_qp,
+	.query_qp = rxe_query_qp,
+	.destroy_qp = rxe_destroy_qp,
+	.post_send = rxe_post_send,
+	.post_recv = rxe_post_recv,
+	/* CQ operations */
+	.create_cq = rxe_create_cq,
+	.destroy_cq = rxe_destroy_cq,
+	.resize_cq = rxe_resize_cq,
+	.poll_cq = rxe_poll_cq,
+	.peek_cq = rxe_peek_cq,
+	.req_notify_cq = rxe_req_notify_cq,
+	/* MR operations */
+	.get_dma_mr = rxe_get_dma_mr,
+	.reg_user_mr = rxe_reg_user_mr,
+	.dereg_mr = rxe_dereg_mr,
+	.alloc_mr = rxe_alloc_mr,
+	.map_mr_sg = rxe_map_mr_sg,
+	/* Multicast operations */
+	.attach_mcast = rxe_attach_mcast,
+	.detach_mcast = rxe_detach_mcast,
+	/* Stats operations */
+	.get_hw_stats = rxe_ib_get_hw_stats,
+	.alloc_hw_stats = rxe_ib_alloc_hw_stats,
+};
+
 int rxe_register_device(struct rxe_dev *rxe)
 {
 	int err;
@@ -1211,49 +1269,7 @@ int rxe_register_device(struct rxe_dev *rxe)
 	    | BIT_ULL(IB_USER_VERBS_CMD_DETACH_MCAST)
 	    ;
 
-	dev->query_device = rxe_query_device;
-	dev->modify_device = rxe_modify_device;
-	dev->query_port = rxe_query_port;
-	dev->modify_port = rxe_modify_port;
-	dev->get_link_layer = rxe_get_link_layer;
-	dev->get_netdev = rxe_get_netdev;
-	dev->query_pkey = rxe_query_pkey;
-	dev->alloc_ucontext = rxe_alloc_ucontext;
-	dev->dealloc_ucontext = rxe_dealloc_ucontext;
-	dev->mmap = rxe_mmap;
-	dev->get_port_immutable = rxe_port_immutable;
-	dev->alloc_pd = rxe_alloc_pd;
-	dev->dealloc_pd = rxe_dealloc_pd;
-	dev->create_ah = rxe_create_ah;
-	dev->modify_ah = rxe_modify_ah;
-	dev->query_ah = rxe_query_ah;
-	dev->destroy_ah = rxe_destroy_ah;
-	dev->create_srq = rxe_create_srq;
-	dev->modify_srq = rxe_modify_srq;
-	dev->query_srq = rxe_query_srq;
-	dev->destroy_srq = rxe_destroy_srq;
-	dev->post_srq_recv = rxe_post_srq_recv;
-	dev->create_qp = rxe_create_qp;
-	dev->modify_qp = rxe_modify_qp;
-	dev->query_qp = rxe_query_qp;
-	dev->destroy_qp = rxe_destroy_qp;
-	dev->post_send = rxe_post_send;
-	dev->post_recv = rxe_post_recv;
-	dev->create_cq = rxe_create_cq;
-	dev->destroy_cq = rxe_destroy_cq;
-	dev->resize_cq = rxe_resize_cq;
-	dev->poll_cq = rxe_poll_cq;
-	dev->peek_cq = rxe_peek_cq;
-	dev->req_notify_cq = rxe_req_notify_cq;
-	dev->get_dma_mr = rxe_get_dma_mr;
-	dev->reg_user_mr = rxe_reg_user_mr;
-	dev->dereg_mr = rxe_dereg_mr;
-	dev->alloc_mr = rxe_alloc_mr;
-	dev->map_mr_sg = rxe_map_mr_sg;
-	dev->attach_mcast = rxe_attach_mcast;
-	dev->detach_mcast = rxe_detach_mcast;
-	dev->get_hw_stats = rxe_ib_get_hw_stats;
-	dev->alloc_hw_stats = rxe_ib_alloc_hw_stats;
+	ib_set_device_ops(dev, &rxe_dev_ops);
 
 	tfm = crypto_alloc_shash("crc32", 0, 0);
 	if (IS_ERR(tfm)) {
From: Kamal Heib
To: Doug Ledford, Jason Gunthorpe
Cc:
linux-rdma@vger.kernel.org, kamalheib1@gmail.com
Subject: [PATCH rdma-next v3 18/20] RDMA/rdmavt: Fix rvt_create_ah prototype
Date: Mon, 5 Nov 2018 13:35:26 +0200
Message-Id: <20181105113528.8317-19-kamalheib1@gmail.com>

The create_ah verb receives a udata buffer as a parameter; this
parameter is missing from rvt_create_ah, so add it.

Signed-off-by: Kamal Heib
---
 drivers/infiniband/sw/rdmavt/ah.c | 4 +++-
 drivers/infiniband/sw/rdmavt/ah.h | 3 ++-
 2 files changed, 5 insertions(+), 2 deletions(-)

diff --git a/drivers/infiniband/sw/rdmavt/ah.c b/drivers/infiniband/sw/rdmavt/ah.c
index 89ec0f64abfc..084bb4baebb5 100644
--- a/drivers/infiniband/sw/rdmavt/ah.c
+++ b/drivers/infiniband/sw/rdmavt/ah.c
@@ -91,13 +91,15 @@ EXPORT_SYMBOL(rvt_check_ah);
  * rvt_create_ah - create an address handle
  * @pd: the protection domain
  * @ah_attr: the attributes of the AH
+ * @udata: pointer to user's input output buffer information.
  *
  * This may be called from interrupt context.
  *
  * Return: newly allocated ah
  */
 struct ib_ah *rvt_create_ah(struct ib_pd *pd,
-			    struct rdma_ah_attr *ah_attr)
+			    struct rdma_ah_attr *ah_attr,
+			    struct ib_udata *udata)
 {
 	struct rvt_ah *ah;
 	struct rvt_dev_info *dev = ib_to_rvt(pd->device);
diff --git a/drivers/infiniband/sw/rdmavt/ah.h b/drivers/infiniband/sw/rdmavt/ah.h
index 16105af99189..25271b48a683 100644
--- a/drivers/infiniband/sw/rdmavt/ah.h
+++ b/drivers/infiniband/sw/rdmavt/ah.h
@@ -51,7 +51,8 @@
 #include
 
 struct ib_ah *rvt_create_ah(struct ib_pd *pd,
-			    struct rdma_ah_attr *ah_attr);
+			    struct rdma_ah_attr *ah_attr,
+			    struct ib_udata *udata);
 int rvt_destroy_ah(struct ib_ah *ibah);
 int rvt_modify_ah(struct ib_ah *ibah, struct rdma_ah_attr *ah_attr);
 int rvt_query_ah(struct ib_ah *ibah, struct rdma_ah_attr *ah_attr);

From patchwork Mon Nov 5 11:35:27 2018
X-Patchwork-Submitter: Kamal Heib
X-Patchwork-Id: 10667903
From: Kamal Heib
To: Doug Ledford, Jason Gunthorpe
Cc: linux-rdma@vger.kernel.org, kamalheib1@gmail.com
Subject: [PATCH rdma-next v3 19/20] RDMA/rdmavt: Initialize ib_device_ops struct
Date: Mon, 5 Nov 2018 13:35:27 +0200
Message-Id: <20181105113528.8317-20-kamalheib1@gmail.com>

Initialize ib_device_ops with the supported operations using
ib_set_device_ops() and remove the use of check_driver_override().

Signed-off-by: Kamal Heib
---
 drivers/infiniband/sw/rdmavt/vt.c | 312 ++++++++------------------------------
 1 file changed, 67 insertions(+), 245 deletions(-)

diff --git a/drivers/infiniband/sw/rdmavt/vt.c b/drivers/infiniband/sw/rdmavt/vt.c
index 723d3daf2eba..cf7aab970cb4 100644
--- a/drivers/infiniband/sw/rdmavt/vt.c
+++ b/drivers/infiniband/sw/rdmavt/vt.c
@@ -392,16 +392,64 @@ enum {
 	_VERB_IDX_MAX /* Must always be last! */
 };
 
-static inline int check_driver_override(struct rvt_dev_info *rdi,
-					size_t offset, void *func)
-{
-	if (!*(void **)((void *)&rdi->ibdev + offset)) {
-		*(void **)((void *)&rdi->ibdev + offset) = func;
-		return 0;
-	}
-
-	return 1;
-}
+static const struct ib_device_ops rvt_dev_ops = {
+	/* Device operations */
+	.query_device = rvt_query_device,
+	.modify_device = rvt_modify_device,
+	/* Port operations */
+	.query_port = rvt_query_port,
+	.modify_port = rvt_modify_port,
+	.get_port_immutable = rvt_get_port_immutable,
+	/* PKey operations */
+	.query_pkey = rvt_query_pkey,
+	/* GID operations */
+	.query_gid = rvt_query_gid,
+	/* Ucontext operations */
+	.alloc_ucontext = rvt_alloc_ucontext,
+	.dealloc_ucontext = rvt_dealloc_ucontext,
+	.mmap = rvt_mmap,
+	/* QP operations */
+	.create_qp = rvt_create_qp,
+	.modify_qp = rvt_modify_qp,
+	.destroy_qp = rvt_destroy_qp,
+	.query_qp = rvt_query_qp,
+	.post_send = rvt_post_send,
+	.post_recv = rvt_post_recv,
+	/* SRQ operations */
+	.post_srq_recv = rvt_post_srq_recv,
+	.create_srq = rvt_create_srq,
+	.modify_srq = rvt_modify_srq,
+	.destroy_srq = rvt_destroy_srq,
+	.query_srq = rvt_query_srq,
+	/* AH operations */
+	.create_ah = rvt_create_ah,
+	.destroy_ah = rvt_destroy_ah,
+	.modify_ah = rvt_modify_ah,
+	.query_ah = rvt_query_ah,
+	/* Multicast operations */
+	.attach_mcast = rvt_attach_mcast,
+	.detach_mcast = rvt_detach_mcast,
+	/* MR operations */
+	.get_dma_mr = rvt_get_dma_mr,
+	.reg_user_mr = rvt_reg_user_mr,
+	.dereg_mr = rvt_dereg_mr,
+	.alloc_mr = rvt_alloc_mr,
+	.map_mr_sg = rvt_map_mr_sg,
+	/* FMR operations */
+	.alloc_fmr = rvt_alloc_fmr,
+	.map_phys_fmr = rvt_map_phys_fmr,
+	.unmap_fmr = rvt_unmap_fmr,
+	.dealloc_fmr = rvt_dealloc_fmr,
+	/* CQ operations */
+	.create_cq = rvt_create_cq,
+	.destroy_cq = rvt_destroy_cq,
+	.poll_cq = rvt_poll_cq,
+	.req_notify_cq = rvt_req_notify_cq,
+	.resize_cq = rvt_resize_cq,
+	/* PD operations */
+	.alloc_pd = rvt_alloc_pd,
+	.dealloc_pd = rvt_dealloc_pd,
+};
 
 static noinline int check_support(struct rvt_dev_info *rdi, int verb)
 {
@@ -416,76 +464,36 @@ static noinline int check_support(struct rvt_dev_info *rdi, int verb)
 			return -EINVAL;
 		break;
 
-	case QUERY_DEVICE:
-		check_driver_override(rdi, offsetof(struct ib_device,
-						    query_device),
-				      rvt_query_device);
-		break;
-
 	case MODIFY_DEVICE:
 		/*
 		 * rdmavt does not support modify device currently drivers must
 		 * provide.
 		 */
-		if (!check_driver_override(rdi, offsetof(struct ib_device,
-							 modify_device),
-					   rvt_modify_device))
+		if (!rdi->ibdev.modify_device)
 			return -EOPNOTSUPP;
 		break;
 
 	case QUERY_PORT:
-		if (!check_driver_override(rdi, offsetof(struct ib_device,
-							 query_port),
-					   rvt_query_port))
+		if (!rdi->ibdev.query_port)
 			if (!rdi->driver_f.query_port_state)
 				return -EINVAL;
 		break;
 
 	case MODIFY_PORT:
-		if (!check_driver_override(rdi, offsetof(struct ib_device,
-							 modify_port),
-					   rvt_modify_port))
+		if (!rdi->ibdev.modify_port)
 			if (!rdi->driver_f.cap_mask_chg ||
 			    !rdi->driver_f.shut_down_port)
 				return -EINVAL;
 		break;
 
-	case QUERY_PKEY:
-		check_driver_override(rdi, offsetof(struct ib_device,
-						    query_pkey),
-				      rvt_query_pkey);
-		break;
-
 	case QUERY_GID:
-		if (!check_driver_override(rdi, offsetof(struct ib_device,
-							 query_gid),
-					   rvt_query_gid))
+		if (!rdi->ibdev.query_gid)
 			if (!rdi->driver_f.get_guid_be)
 				return -EINVAL;
 		break;
 
-	case ALLOC_UCONTEXT:
-		check_driver_override(rdi, offsetof(struct ib_device,
-						    alloc_ucontext),
-				      rvt_alloc_ucontext);
-		break;
-
-	case DEALLOC_UCONTEXT:
-		check_driver_override(rdi, offsetof(struct ib_device,
-						    dealloc_ucontext),
-				      rvt_dealloc_ucontext);
-		break;
-
-	case GET_PORT_IMMUTABLE:
-		check_driver_override(rdi, offsetof(struct ib_device,
-						    get_port_immutable),
-				      rvt_get_port_immutable);
-		break;
-
 	case CREATE_QP:
-		if (!check_driver_override(rdi, offsetof(struct ib_device,
-							 create_qp),
-					   rvt_create_qp))
+		if (!rdi->ibdev.create_qp)
 			if (!rdi->driver_f.qp_priv_alloc ||
 			    !rdi->driver_f.qp_priv_free ||
 			    !rdi->driver_f.notify_qp_reset ||
@@ -496,9 +504,7 @@ static noinline int check_support(struct rvt_dev_info *rdi, int verb)
 		break;
 
 	case MODIFY_QP:
-		if (!check_driver_override(rdi, offsetof(struct ib_device,
-							 modify_qp),
-					   rvt_modify_qp))
+		if (!rdi->ibdev.modify_qp)
 			if (!rdi->driver_f.notify_qp_reset ||
 			    !rdi->driver_f.schedule_send ||
 			    !rdi->driver_f.get_pmtu_from_attr ||
@@ -512,9 +518,7 @@ static noinline int check_support(struct rvt_dev_info *rdi, int verb)
 		break;
 
 	case DESTROY_QP:
-		if (!check_driver_override(rdi, offsetof(struct ib_device,
-							 destroy_qp),
-					   rvt_destroy_qp))
+		if (!rdi->ibdev.destroy_qp)
 			if (!rdi->driver_f.qp_priv_free ||
 			    !rdi->driver_f.notify_qp_reset ||
 			    !rdi->driver_f.flush_qp_waiters ||
@@ -523,197 +527,14 @@ static noinline int check_support(struct rvt_dev_info *rdi, int verb)
 			return -EINVAL;
 		break;
 
-	case QUERY_QP:
-		check_driver_override(rdi, offsetof(struct ib_device,
-						    query_qp),
-				      rvt_query_qp);
-		break;
-
 	case POST_SEND:
-		if (!check_driver_override(rdi, offsetof(struct ib_device,
-							 post_send),
-					   rvt_post_send))
+		if (!rdi->ibdev.post_send)
 			if (!rdi->driver_f.schedule_send ||
 			    !rdi->driver_f.do_send ||
 			    !rdi->post_parms)
 				return -EINVAL;
 		break;
 
-	case POST_RECV:
-		check_driver_override(rdi, offsetof(struct ib_device,
-						    post_recv),
-				      rvt_post_recv);
-		break;
-
-	case POST_SRQ_RECV:
-		check_driver_override(rdi, offsetof(struct ib_device,
-						    post_srq_recv),
-				      rvt_post_srq_recv);
-		break;
-
-	case CREATE_AH:
-		check_driver_override(rdi, offsetof(struct ib_device,
-						    create_ah),
-				      rvt_create_ah);
-		break;
-
-	case DESTROY_AH:
-		check_driver_override(rdi, offsetof(struct ib_device,
-						    destroy_ah),
-				      rvt_destroy_ah);
-		break;
-
-	case MODIFY_AH:
-		check_driver_override(rdi, offsetof(struct ib_device,
-						    modify_ah),
-				      rvt_modify_ah);
-		break;
-
-	case QUERY_AH:
-		check_driver_override(rdi, offsetof(struct ib_device,
-						    query_ah),
-				      rvt_query_ah);
-		break;
-
-	case CREATE_SRQ:
-		check_driver_override(rdi, offsetof(struct ib_device,
-						    create_srq),
-				      rvt_create_srq);
-		break;
-
-	case MODIFY_SRQ:
-		check_driver_override(rdi, offsetof(struct ib_device,
-						    modify_srq),
-				      rvt_modify_srq);
-		break;
-
-	case DESTROY_SRQ:
-		check_driver_override(rdi, offsetof(struct ib_device,
-						    destroy_srq),
-				      rvt_destroy_srq);
-		break;
-
-	case QUERY_SRQ:
-		check_driver_override(rdi, offsetof(struct ib_device,
-						    query_srq),
-				      rvt_query_srq);
-		break;
-
-	case ATTACH_MCAST:
-		check_driver_override(rdi, offsetof(struct ib_device,
-						    attach_mcast),
-				      rvt_attach_mcast);
-		break;
-
-	case DETACH_MCAST:
-		check_driver_override(rdi, offsetof(struct ib_device,
-						    detach_mcast),
-				      rvt_detach_mcast);
-		break;
-
-	case GET_DMA_MR:
-		check_driver_override(rdi, offsetof(struct ib_device,
-						    get_dma_mr),
-				      rvt_get_dma_mr);
-		break;
-
-	case REG_USER_MR:
-		check_driver_override(rdi, offsetof(struct ib_device,
-						    reg_user_mr),
-				      rvt_reg_user_mr);
-		break;
-
-	case DEREG_MR:
-		check_driver_override(rdi, offsetof(struct ib_device,
-						    dereg_mr),
-				      rvt_dereg_mr);
-		break;
-
-	case ALLOC_FMR:
-		check_driver_override(rdi, offsetof(struct ib_device,
-						    alloc_fmr),
-				      rvt_alloc_fmr);
-		break;
-
-	case ALLOC_MR:
-		check_driver_override(rdi, offsetof(struct ib_device,
-						    alloc_mr),
-				      rvt_alloc_mr);
-		break;
-
-	case MAP_MR_SG:
-		check_driver_override(rdi, offsetof(struct ib_device,
-						    map_mr_sg),
-				      rvt_map_mr_sg);
-		break;
-
-	case MAP_PHYS_FMR:
-		check_driver_override(rdi, offsetof(struct ib_device,
-						    map_phys_fmr),
-				      rvt_map_phys_fmr);
-		break;
-
-	case UNMAP_FMR:
-		check_driver_override(rdi, offsetof(struct ib_device,
-						    unmap_fmr),
-				      rvt_unmap_fmr);
-		break;
-
-	case DEALLOC_FMR:
-		check_driver_override(rdi, offsetof(struct ib_device,
-						    dealloc_fmr),
-				      rvt_dealloc_fmr);
-		break;
-
-	case MMAP:
-		check_driver_override(rdi, offsetof(struct ib_device,
-						    mmap),
-				      rvt_mmap);
-		break;
-
-	case CREATE_CQ:
-		check_driver_override(rdi, offsetof(struct ib_device,
-						    create_cq),
-				      rvt_create_cq);
-		break;
-
-	case DESTROY_CQ:
-		check_driver_override(rdi, offsetof(struct ib_device,
-						    destroy_cq),
-				      rvt_destroy_cq);
-		break;
-
-	case POLL_CQ:
-		check_driver_override(rdi, offsetof(struct ib_device,
-						    poll_cq),
-				      rvt_poll_cq);
-		break;
-
-	case REQ_NOTFIY_CQ:
-		check_driver_override(rdi, offsetof(struct ib_device,
-						    req_notify_cq),
-				      rvt_req_notify_cq);
-		break;
-
-	case RESIZE_CQ:
-		check_driver_override(rdi, offsetof(struct ib_device,
-						    resize_cq),
-				      rvt_resize_cq);
-		break;
-
-	case ALLOC_PD:
-		check_driver_override(rdi, offsetof(struct ib_device,
-						    alloc_pd),
-				      rvt_alloc_pd);
-		break;
-
-	case DEALLOC_PD:
-		check_driver_override(rdi, offsetof(struct ib_device,
-						    dealloc_pd),
-				      rvt_dealloc_pd);
-		break;
-
-	default:
-		return -EINVAL;
 	}
 
 	return 0;
@@ -745,6 +566,7 @@ int rvt_register_device(struct rvt_dev_info *rdi, u32 driver_id)
 		return -EINVAL;
 	}
 
+	ib_set_device_ops(&rdi->ibdev, &rvt_dev_ops);
 	/* Once we get past here we can use rvt_pr macros and tracepoints */
 	trace_rvt_dbg(rdi, "Driver attempting registration");

From patchwork Mon Nov 5 11:35:28 2018
X-Patchwork-Submitter: Kamal Heib
X-Patchwork-Id: 10667905
From: Kamal Heib
To: Doug Ledford, Jason Gunthorpe
Cc: linux-rdma@vger.kernel.org, kamalheib1@gmail.com
Subject: [PATCH rdma-next v3 20/20] RDMA: Start use ib_device_ops
Date: Mon, 5 Nov 2018 13:35:28 +0200
Message-Id: <20181105113528.8317-21-kamalheib1@gmail.com>

Make all the required changes to start using the ib_device_ops
structure.

Signed-off-by: Kamal Heib
---
 drivers/infiniband/core/cache.c | 12 +-
 drivers/infiniband/core/core_priv.h | 12 +-
 drivers/infiniband/core/cq.c | 6 +-
 drivers/infiniband/core/device.c | 209 +++++++-------
 drivers/infiniband/core/fmr_pool.c | 4 +-
 drivers/infiniband/core/mad.c | 24 +-
 drivers/infiniband/core/nldev.c | 4 +-
 drivers/infiniband/core/opa_smi.h | 4 +-
 drivers/infiniband/core/rdma_core.c | 6 +-
 drivers/infiniband/core/security.c | 8 +-
 drivers/infiniband/core/smi.h | 4 +-
 drivers/infiniband/core/sysfs.c | 28 +-
 drivers/infiniband/core/ucm.c | 2 +-
 drivers/infiniband/core/uverbs_cmd.c | 64 ++---
 drivers/infiniband/core/uverbs_main.c | 14 +-
 drivers/infiniband/core/uverbs_std_types.c | 2 +-
 .../infiniband/core/uverbs_std_types_counters.c | 10 +-
 drivers/infiniband/core/uverbs_std_types_cq.c | 4 +-
 drivers/infiniband/core/uverbs_std_types_dm.c | 6 +-
 .../infiniband/core/uverbs_std_types_flow_action.c | 14 +-
 drivers/infiniband/core/uverbs_std_types_mr.c | 4 +-
 drivers/infiniband/core/verbs.c | 149 +++++-----
 drivers/infiniband/hw/i40iw/i40iw_cm.c | 2 +-
 drivers/infiniband/hw/mlx4/alias_GUID.c | 2 +-
 drivers/infiniband/hw/mlx5/main.c | 2 +-
 drivers/infiniband/hw/nes/nes_cm.c | 2 +-
 drivers/infiniband/sw/rdmavt/vt.c | 16 +-
 drivers/infiniband/ulp/ipoib/ipoib_main.c | 12 +-
 drivers/infiniband/ulp/iser/iser_memory.c | 4 +-
 drivers/infiniband/ulp/opa_vnic/opa_vnic_netdev.c | 8 +-
 drivers/infiniband/ulp/srp/ib_srp.c | 6 +-
 fs/cifs/smbdirect.c | 2 +-
 include/rdma/ib_verbs.h | 299 +--------------------
 net/rds/ib.c | 4 +-
 net/sunrpc/xprtrdma/fmr_ops.c | 2 +-
 35 files changed, 345 insertions(+), 606 deletions(-)

diff --git a/drivers/infiniband/core/cache.c b/drivers/infiniband/core/cache.c
index 5b2fce4a7091..22e20ed5a393 100644
--- a/drivers/infiniband/core/cache.c
+++ b/drivers/infiniband/core/cache.c
@@ -217,7 +217,7 @@ static void free_gid_entry_locked(struct ib_gid_table_entry *entry)
 
 	if (rdma_cap_roce_gid_table(device, port_num) &&
 	    entry->state != GID_TABLE_ENTRY_INVALID)
-		device->del_gid(&entry->attr, &entry->context);
+		device->ops.del_gid(&entry->attr, &entry->context);
 
 	write_lock_irq(&table->rwlock);
@@ -324,7 +324,7 @@ static int add_roce_gid(struct ib_gid_table_entry *entry)
 		return -EINVAL;
 	}
 	if (rdma_cap_roce_gid_table(attr->device, attr->port_num)) {
-		ret = attr->device->add_gid(attr, &entry->context);
+		ret = attr->device->ops.add_gid(attr, &entry->context);
 		if (ret) {
 			dev_err(&attr->device->dev,
 				"%s GID add failed port=%d index=%d\n",
@@ -548,8 +548,8 @@ int ib_cache_gid_add(struct ib_device *ib_dev, u8 port,
 	unsigned long mask;
 	int ret;
 
-	if (ib_dev->get_netdev) {
-		idev = ib_dev->get_netdev(ib_dev, port);
+	if (ib_dev->ops.get_netdev) {
+		idev = ib_dev->ops.get_netdev(ib_dev, port);
 		if (idev && attr->ndev != idev) {
 			union ib_gid default_gid;
@@ -1296,9 +1296,9 @@ static int config_non_roce_gid_cache(struct ib_device *device,
 	mutex_lock(&table->lock);
 	for (i = 0; i < gid_tbl_len; ++i) {
-		if (!device->query_gid)
+		if (!device->ops.query_gid)
 			continue;
-		ret = device->query_gid(device, port, i, &gid_attr.gid);
+		ret = device->ops.query_gid(device, port, i, &gid_attr.gid);
 		if (ret) {
 			dev_warn(&device->dev,
 				 "query_gid failed (%d) for index %d\n", ret,
diff --git a/drivers/infiniband/core/core_priv.h b/drivers/infiniband/core/core_priv.h
index bb9007a0cca7..bc499f2f6bca 100644
--- a/drivers/infiniband/core/core_priv.h
+++ b/drivers/infiniband/core/core_priv.h
@@ -244,10 +244,10 @@ static inline int ib_security_modify_qp(struct ib_qp *qp,
 					int qp_attr_mask,
 					struct ib_udata *udata)
 {
-	return qp->device->modify_qp(qp->real_qp,
-				     qp_attr,
-				     qp_attr_mask,
-				     udata);
+	return qp->device->ops.modify_qp(qp->real_qp,
+					 qp_attr,
+					 qp_attr_mask,
+					 udata);
 }
 
 static inline int ib_create_qp_security(struct ib_qp *qp,
@@ -308,10 +308,10 @@ static inline struct ib_qp *_ib_create_qp(struct ib_device *dev,
 {
 	struct ib_qp *qp;
 
-	if (!dev->create_qp)
+	if (!dev->ops.create_qp)
 		return ERR_PTR(-EOPNOTSUPP);
 
-	qp = dev->create_qp(pd, attr, udata);
+	qp = dev->ops.create_qp(pd, attr, udata);
 	if (IS_ERR(qp))
 		return qp;
diff --git a/drivers/infiniband/core/cq.c b/drivers/infiniband/core/cq.c
index b1e5365ddafa..7fb4f64ae933 100644
--- a/drivers/infiniband/core/cq.c
+++ b/drivers/infiniband/core/cq.c
@@ -145,7 +145,7 @@ struct ib_cq *__ib_alloc_cq(struct ib_device *dev, void *private,
 	struct ib_cq *cq;
 	int ret = -ENOMEM;
 
-	cq = dev->create_cq(dev, &cq_attr, NULL, NULL);
+	cq = dev->ops.create_cq(dev, &cq_attr, NULL, NULL);
 	if (IS_ERR(cq))
 		return cq;
@@ -193,7 +193,7 @@ struct ib_cq *__ib_alloc_cq(struct ib_device *dev, void *private,
 	kfree(cq->wc);
 	rdma_restrack_del(&cq->res);
 out_destroy_cq:
-	cq->device->destroy_cq(cq);
+	cq->device->ops.destroy_cq(cq);
 	return ERR_PTR(ret);
 }
 EXPORT_SYMBOL(__ib_alloc_cq);
@@ -225,7 +225,7 @@ void ib_free_cq(struct ib_cq *cq)
 	kfree(cq->wc);
 	rdma_restrack_del(&cq->res);
-	ret = cq->device->destroy_cq(cq);
+	ret = cq->device->ops.destroy_cq(cq);
 	WARN_ON_ONCE(ret);
 }
 EXPORT_SYMBOL(ib_free_cq);
diff --git a/drivers/infiniband/core/device.c b/drivers/infiniband/core/device.c
index 2fefb9d694dc..fb933d518754
100644 --- a/drivers/infiniband/core/device.c +++ b/drivers/infiniband/core/device.c @@ -96,7 +96,7 @@ static struct notifier_block ibdev_lsm_nb = { static int ib_device_check_mandatory(struct ib_device *device) { -#define IB_MANDATORY_FUNC(x) { offsetof(struct ib_device, x), #x } +#define IB_MANDATORY_FUNC(x) { offsetof(struct ib_device_ops, x), #x } static const struct { size_t offset; char *name; @@ -122,7 +122,8 @@ static int ib_device_check_mandatory(struct ib_device *device) int i; for (i = 0; i < ARRAY_SIZE(mandatory_table); ++i) { - if (!*(void **) ((void *) device + mandatory_table[i].offset)) { + if (!*(void **) ((void *) &device->ops + + mandatory_table[i].offset)) { dev_warn(&device->dev, "Device is missing mandatory function %s\n", mandatory_table[i].name); @@ -362,8 +363,8 @@ static int read_port_immutable(struct ib_device *device) return -ENOMEM; for (port = start_port; port <= end_port; ++port) { - ret = device->get_port_immutable(device, port, - &device->port_immutable[port]); + ret = device->ops.get_port_immutable(device, port, + &device->port_immutable[port]); if (ret) return ret; @@ -375,8 +376,8 @@ static int read_port_immutable(struct ib_device *device) void ib_get_device_fw_str(struct ib_device *dev, char *str) { - if (dev->get_dev_fw_str) - dev->get_dev_fw_str(dev, str); + if (dev->ops.get_dev_fw_str) + dev->ops.get_dev_fw_str(dev, str); else str[0] = '\0'; } @@ -525,7 +526,7 @@ static int setup_device(struct ib_device *device) } memset(&device->attrs, 0, sizeof(device->attrs)); - ret = device->query_device(device, &device->attrs, &uhw); + ret = device->ops.query_device(device, &device->attrs, &uhw); if (ret) { dev_warn(&device->dev, "Couldn't query the device attributes\n"); @@ -905,14 +906,14 @@ int ib_query_port(struct ib_device *device, return -EINVAL; memset(port_attr, 0, sizeof(*port_attr)); - err = device->query_port(device, port_num, port_attr); + err = device->ops.query_port(device, port_num, port_attr); if (err || 
port_attr->subnet_prefix) return err; if (rdma_port_get_link_layer(device, port_num) != IB_LINK_LAYER_INFINIBAND) return 0; - err = device->query_gid(device, port_num, 0, &gid); + err = device->ops.query_gid(device, port_num, 0, &gid); if (err) return err; @@ -946,8 +947,8 @@ void ib_enum_roce_netdev(struct ib_device *ib_dev, if (rdma_protocol_roce(ib_dev, port)) { struct net_device *idev = NULL; - if (ib_dev->get_netdev) - idev = ib_dev->get_netdev(ib_dev, port); + if (ib_dev->ops.get_netdev) + idev = ib_dev->ops.get_netdev(ib_dev, port); if (idev && idev->reg_state >= NETREG_UNREGISTERED) { @@ -1024,7 +1025,7 @@ int ib_enum_all_devs(nldev_callback nldev_cb, struct sk_buff *skb, int ib_query_pkey(struct ib_device *device, u8 port_num, u16 index, u16 *pkey) { - return device->query_pkey(device, port_num, index, pkey); + return device->ops.query_pkey(device, port_num, index, pkey); } EXPORT_SYMBOL(ib_query_pkey); @@ -1041,11 +1042,11 @@ int ib_modify_device(struct ib_device *device, int device_modify_mask, struct ib_device_modify *device_modify) { - if (!device->modify_device) + if (!device->ops.modify_device) return -ENOSYS; - return device->modify_device(device, device_modify_mask, - device_modify); + return device->ops.modify_device(device, device_modify_mask, + device_modify); } EXPORT_SYMBOL(ib_modify_device); @@ -1069,9 +1070,10 @@ int ib_modify_port(struct ib_device *device, if (!rdma_is_port_valid(device, port_num)) return -EINVAL; - if (device->modify_port) - rc = device->modify_port(device, port_num, port_modify_mask, - port_modify); + if (device->ops.modify_port) + rc = device->ops.modify_port(device, port_num, + port_modify_mask, + port_modify); else rc = rdma_protocol_roce(device, port_num) ? 
			0 : -ENOSYS;
 	return rc;
@@ -1200,6 +1202,7 @@ EXPORT_SYMBOL(ib_get_net_dev_by_params);
 
 void ib_set_device_ops(struct ib_device *dev, const struct ib_device_ops *ops)
 {
+	struct ib_device_ops *dev_ops = &dev->ops;
 #define SET_DEVICE_OP(ptr, name) \
 	do { \
 		if (ops->name) \
@@ -1207,91 +1210,91 @@ void ib_set_device_ops(struct ib_device *dev, const struct ib_device_ops *ops)
 				(ptr)->name = ops->name; \
 	} while (0)
 
-	SET_DEVICE_OP(dev, query_device);
-	SET_DEVICE_OP(dev, modify_device);
-	SET_DEVICE_OP(dev, get_dev_fw_str);
-	SET_DEVICE_OP(dev, get_vector_affinity);
-	SET_DEVICE_OP(dev, query_port);
-	SET_DEVICE_OP(dev, modify_port);
-	SET_DEVICE_OP(dev, get_port_immutable);
-	SET_DEVICE_OP(dev, get_link_layer);
-	SET_DEVICE_OP(dev, get_netdev);
-	SET_DEVICE_OP(dev, alloc_rdma_netdev);
-	SET_DEVICE_OP(dev, query_gid);
-	SET_DEVICE_OP(dev, add_gid);
-	SET_DEVICE_OP(dev, del_gid);
-	SET_DEVICE_OP(dev, query_pkey);
-	SET_DEVICE_OP(dev, alloc_ucontext);
-	SET_DEVICE_OP(dev, dealloc_ucontext);
-	SET_DEVICE_OP(dev, mmap);
-	SET_DEVICE_OP(dev, disassociate_ucontext);
-	SET_DEVICE_OP(dev, alloc_pd);
-	SET_DEVICE_OP(dev, dealloc_pd);
-	SET_DEVICE_OP(dev, create_ah);
-	SET_DEVICE_OP(dev, modify_ah);
-	SET_DEVICE_OP(dev, query_ah);
-	SET_DEVICE_OP(dev, destroy_ah);
-	SET_DEVICE_OP(dev, create_srq);
-	SET_DEVICE_OP(dev, modify_srq);
-	SET_DEVICE_OP(dev, query_srq);
-	SET_DEVICE_OP(dev, destroy_srq);
-	SET_DEVICE_OP(dev, post_srq_recv);
-	SET_DEVICE_OP(dev, create_qp);
-	SET_DEVICE_OP(dev, modify_qp);
-	SET_DEVICE_OP(dev, query_qp);
-	SET_DEVICE_OP(dev, destroy_qp);
-	SET_DEVICE_OP(dev, post_send);
-	SET_DEVICE_OP(dev, post_recv);
-	SET_DEVICE_OP(dev, drain_rq);
-	SET_DEVICE_OP(dev, drain_sq);
-	SET_DEVICE_OP(dev, create_cq);
-	SET_DEVICE_OP(dev, modify_cq);
-	SET_DEVICE_OP(dev, destroy_cq);
-	SET_DEVICE_OP(dev, resize_cq);
-	SET_DEVICE_OP(dev, poll_cq);
-	SET_DEVICE_OP(dev, peek_cq);
-	SET_DEVICE_OP(dev, req_notify_cq);
-	SET_DEVICE_OP(dev, req_ncomp_notif);
-	SET_DEVICE_OP(dev, get_dma_mr);
-	SET_DEVICE_OP(dev, reg_user_mr);
-	SET_DEVICE_OP(dev, rereg_user_mr);
-	SET_DEVICE_OP(dev, dereg_mr);
-	SET_DEVICE_OP(dev, alloc_mr);
-	SET_DEVICE_OP(dev, map_mr_sg);
-	SET_DEVICE_OP(dev, check_mr_status);
-	SET_DEVICE_OP(dev, alloc_mw);
-	SET_DEVICE_OP(dev, dealloc_mw);
-	SET_DEVICE_OP(dev, alloc_fmr);
-	SET_DEVICE_OP(dev, map_phys_fmr);
-	SET_DEVICE_OP(dev, unmap_fmr);
-	SET_DEVICE_OP(dev, dealloc_fmr);
-	SET_DEVICE_OP(dev, attach_mcast);
-	SET_DEVICE_OP(dev, detach_mcast);
-	SET_DEVICE_OP(dev, process_mad);
-	SET_DEVICE_OP(dev, alloc_xrcd);
-	SET_DEVICE_OP(dev, dealloc_xrcd);
-	SET_DEVICE_OP(dev, create_flow);
-	SET_DEVICE_OP(dev, destroy_flow);
-	SET_DEVICE_OP(dev, create_flow_action_esp);
-	SET_DEVICE_OP(dev, destroy_flow_action);
-	SET_DEVICE_OP(dev, modify_flow_action_esp);
-	SET_DEVICE_OP(dev, set_vf_link_state);
-	SET_DEVICE_OP(dev, get_vf_config);
-	SET_DEVICE_OP(dev, get_vf_stats);
-	SET_DEVICE_OP(dev, set_vf_guid);
-	SET_DEVICE_OP(dev, create_wq);
-	SET_DEVICE_OP(dev, destroy_wq);
-	SET_DEVICE_OP(dev, modify_wq);
-	SET_DEVICE_OP(dev, create_rwq_ind_table);
-	SET_DEVICE_OP(dev, destroy_rwq_ind_table);
-	SET_DEVICE_OP(dev, alloc_dm);
-	SET_DEVICE_OP(dev, dealloc_dm);
-	SET_DEVICE_OP(dev, reg_dm_mr);
-	SET_DEVICE_OP(dev, create_counters);
-	SET_DEVICE_OP(dev, destroy_counters);
-	SET_DEVICE_OP(dev, read_counters);
-	SET_DEVICE_OP(dev, alloc_hw_stats);
-	SET_DEVICE_OP(dev, get_hw_stats);
+	SET_DEVICE_OP(dev_ops, query_device);
+	SET_DEVICE_OP(dev_ops, modify_device);
+	SET_DEVICE_OP(dev_ops, get_dev_fw_str);
+	SET_DEVICE_OP(dev_ops, get_vector_affinity);
+	SET_DEVICE_OP(dev_ops, query_port);
+	SET_DEVICE_OP(dev_ops, modify_port);
+	SET_DEVICE_OP(dev_ops, get_port_immutable);
+	SET_DEVICE_OP(dev_ops, get_link_layer);
+	SET_DEVICE_OP(dev_ops, get_netdev);
+	SET_DEVICE_OP(dev_ops, alloc_rdma_netdev);
+	SET_DEVICE_OP(dev_ops, query_gid);
+	SET_DEVICE_OP(dev_ops, add_gid);
+	SET_DEVICE_OP(dev_ops, del_gid);
+	SET_DEVICE_OP(dev_ops, query_pkey);
+	SET_DEVICE_OP(dev_ops, alloc_ucontext);
+	SET_DEVICE_OP(dev_ops, dealloc_ucontext);
+	SET_DEVICE_OP(dev_ops, mmap);
+	SET_DEVICE_OP(dev_ops, disassociate_ucontext);
+	SET_DEVICE_OP(dev_ops, alloc_pd);
+	SET_DEVICE_OP(dev_ops, dealloc_pd);
+	SET_DEVICE_OP(dev_ops, create_ah);
+	SET_DEVICE_OP(dev_ops, modify_ah);
+	SET_DEVICE_OP(dev_ops, query_ah);
+	SET_DEVICE_OP(dev_ops, destroy_ah);
+	SET_DEVICE_OP(dev_ops, create_srq);
+	SET_DEVICE_OP(dev_ops, modify_srq);
+	SET_DEVICE_OP(dev_ops, query_srq);
+	SET_DEVICE_OP(dev_ops, destroy_srq);
+	SET_DEVICE_OP(dev_ops, post_srq_recv);
+	SET_DEVICE_OP(dev_ops, create_qp);
+	SET_DEVICE_OP(dev_ops, modify_qp);
+	SET_DEVICE_OP(dev_ops, query_qp);
+	SET_DEVICE_OP(dev_ops, destroy_qp);
+	SET_DEVICE_OP(dev_ops, post_send);
+	SET_DEVICE_OP(dev_ops, post_recv);
+	SET_DEVICE_OP(dev_ops, drain_rq);
+	SET_DEVICE_OP(dev_ops, drain_sq);
+	SET_DEVICE_OP(dev_ops, create_cq);
+	SET_DEVICE_OP(dev_ops, modify_cq);
+	SET_DEVICE_OP(dev_ops, destroy_cq);
+	SET_DEVICE_OP(dev_ops, resize_cq);
+	SET_DEVICE_OP(dev_ops, poll_cq);
+	SET_DEVICE_OP(dev_ops, peek_cq);
+	SET_DEVICE_OP(dev_ops, req_notify_cq);
+	SET_DEVICE_OP(dev_ops, req_ncomp_notif);
+	SET_DEVICE_OP(dev_ops, get_dma_mr);
+	SET_DEVICE_OP(dev_ops, reg_user_mr);
+	SET_DEVICE_OP(dev_ops, rereg_user_mr);
+	SET_DEVICE_OP(dev_ops, dereg_mr);
+	SET_DEVICE_OP(dev_ops, alloc_mr);
+	SET_DEVICE_OP(dev_ops, map_mr_sg);
+	SET_DEVICE_OP(dev_ops, check_mr_status);
+	SET_DEVICE_OP(dev_ops, alloc_mw);
+	SET_DEVICE_OP(dev_ops, dealloc_mw);
+	SET_DEVICE_OP(dev_ops, alloc_fmr);
+	SET_DEVICE_OP(dev_ops, map_phys_fmr);
+	SET_DEVICE_OP(dev_ops, unmap_fmr);
+	SET_DEVICE_OP(dev_ops, dealloc_fmr);
+	SET_DEVICE_OP(dev_ops, attach_mcast);
+	SET_DEVICE_OP(dev_ops, detach_mcast);
+	SET_DEVICE_OP(dev_ops, process_mad);
+	SET_DEVICE_OP(dev_ops, alloc_xrcd);
+	SET_DEVICE_OP(dev_ops, dealloc_xrcd);
+	SET_DEVICE_OP(dev_ops, create_flow);
+	SET_DEVICE_OP(dev_ops, destroy_flow);
+	SET_DEVICE_OP(dev_ops, create_flow_action_esp);
+	SET_DEVICE_OP(dev_ops, destroy_flow_action);
+	SET_DEVICE_OP(dev_ops, modify_flow_action_esp);
+	SET_DEVICE_OP(dev_ops, set_vf_link_state);
+	SET_DEVICE_OP(dev_ops, get_vf_config);
+	SET_DEVICE_OP(dev_ops, get_vf_stats);
+	SET_DEVICE_OP(dev_ops, set_vf_guid);
+	SET_DEVICE_OP(dev_ops, create_wq);
+	SET_DEVICE_OP(dev_ops, destroy_wq);
+	SET_DEVICE_OP(dev_ops, modify_wq);
+	SET_DEVICE_OP(dev_ops, create_rwq_ind_table);
+	SET_DEVICE_OP(dev_ops, destroy_rwq_ind_table);
+	SET_DEVICE_OP(dev_ops, alloc_dm);
+	SET_DEVICE_OP(dev_ops, dealloc_dm);
+	SET_DEVICE_OP(dev_ops, reg_dm_mr);
+	SET_DEVICE_OP(dev_ops, create_counters);
+	SET_DEVICE_OP(dev_ops, destroy_counters);
+	SET_DEVICE_OP(dev_ops, read_counters);
+	SET_DEVICE_OP(dev_ops, alloc_hw_stats);
+	SET_DEVICE_OP(dev_ops, get_hw_stats);
 }
 EXPORT_SYMBOL(ib_set_device_ops);
diff --git a/drivers/infiniband/core/fmr_pool.c b/drivers/infiniband/core/fmr_pool.c
index 83ba0068e8bb..001045f58a50 100644
--- a/drivers/infiniband/core/fmr_pool.c
+++ b/drivers/infiniband/core/fmr_pool.c
@@ -211,8 +211,8 @@ struct ib_fmr_pool *ib_create_fmr_pool(struct ib_pd *pd,
 		return ERR_PTR(-EINVAL);
 
 	device = pd->device;
-	if (!device->alloc_fmr || !device->dealloc_fmr ||
-	    !device->map_phys_fmr || !device->unmap_fmr) {
+	if (!device->ops.alloc_fmr || !device->ops.dealloc_fmr ||
+	    !device->ops.map_phys_fmr || !device->ops.unmap_fmr) {
 		dev_info(&device->dev, "Device does not support FMRs\n");
 		return ERR_PTR(-ENOSYS);
 	}
diff --git a/drivers/infiniband/core/mad.c b/drivers/infiniband/core/mad.c
index d7025cd5be28..7433888f4fc0 100644
--- a/drivers/infiniband/core/mad.c
+++ b/drivers/infiniband/core/mad.c
@@ -888,10 +888,10 @@ static int handle_outgoing_dr_smp(struct ib_mad_agent_private *mad_agent_priv,
 	}
 
 	/* No GRH for DR SMP */
-	ret = device->process_mad(device, 0, port_num, &mad_wc, NULL,
-				  (const struct ib_mad_hdr *)smp, mad_size,
-				  (struct ib_mad_hdr *)mad_priv->mad,
-				  &mad_size, &out_mad_pkey_index);
+	ret = device->ops.process_mad(device, 0,
port_num, &mad_wc, NULL, + (const struct ib_mad_hdr *)smp, mad_size, + (struct ib_mad_hdr *)mad_priv->mad, + &mad_size, &out_mad_pkey_index); switch (ret) { case IB_MAD_RESULT_SUCCESS | IB_MAD_RESULT_REPLY: @@ -2305,14 +2305,14 @@ static void ib_mad_recv_done(struct ib_cq *cq, struct ib_wc *wc) } /* Give driver "right of first refusal" on incoming MAD */ - if (port_priv->device->process_mad) { - ret = port_priv->device->process_mad(port_priv->device, 0, - port_priv->port_num, - wc, &recv->grh, - (const struct ib_mad_hdr *)recv->mad, - recv->mad_size, - (struct ib_mad_hdr *)response->mad, - &mad_size, &resp_mad_pkey_index); + if (port_priv->device->ops.process_mad) { + ret = port_priv->device->ops.process_mad(port_priv->device, 0, + port_priv->port_num, + wc, &recv->grh, + (const struct ib_mad_hdr *)recv->mad, + recv->mad_size, + (struct ib_mad_hdr *)response->mad, + &mad_size, &resp_mad_pkey_index); if (opa) wc->pkey_index = resp_mad_pkey_index; diff --git a/drivers/infiniband/core/nldev.c b/drivers/infiniband/core/nldev.c index 573399e3ccc1..ab5c4ddd2b26 100644 --- a/drivers/infiniband/core/nldev.c +++ b/drivers/infiniband/core/nldev.c @@ -259,8 +259,8 @@ static int fill_port_info(struct sk_buff *msg, if (nla_put_u8(msg, RDMA_NLDEV_ATTR_PORT_PHYS_STATE, attr.phys_state)) return -EMSGSIZE; - if (device->get_netdev) - netdev = device->get_netdev(device, port); + if (device->ops.get_netdev) + netdev = device->ops.get_netdev(device, port); if (netdev && net_eq(dev_net(netdev), net)) { ret = nla_put_u32(msg, diff --git a/drivers/infiniband/core/opa_smi.h b/drivers/infiniband/core/opa_smi.h index 3bfab3505a29..af4879bdf3d6 100644 --- a/drivers/infiniband/core/opa_smi.h +++ b/drivers/infiniband/core/opa_smi.h @@ -55,7 +55,7 @@ static inline enum smi_action opa_smi_check_local_smp(struct opa_smp *smp, { /* C14-9:3 -- We're at the end of the DR segment of path */ /* C14-9:4 -- Hop Pointer = Hop Count + 1 -> give to SMA/SM */ - return (device->process_mad && + return 
(device->ops.process_mad && !opa_get_smp_direction(smp) && (smp->hop_ptr == smp->hop_cnt + 1)) ? IB_SMI_HANDLE : IB_SMI_DISCARD; @@ -70,7 +70,7 @@ static inline enum smi_action opa_smi_check_local_returning_smp(struct opa_smp * { /* C14-13:3 -- We're at the end of the DR segment of path */ /* C14-13:4 -- Hop Pointer == 0 -> give to SM */ - return (device->process_mad && + return (device->ops.process_mad && opa_get_smp_direction(smp) && !smp->hop_ptr) ? IB_SMI_HANDLE : IB_SMI_DISCARD; } diff --git a/drivers/infiniband/core/rdma_core.c b/drivers/infiniband/core/rdma_core.c index 752a55c6bdce..717b04ccb95a 100644 --- a/drivers/infiniband/core/rdma_core.c +++ b/drivers/infiniband/core/rdma_core.c @@ -812,8 +812,8 @@ static void ufile_destroy_ucontext(struct ib_uverbs_file *ufile, */ if (reason == RDMA_REMOVE_DRIVER_REMOVE) { uverbs_user_mmap_disassociate(ufile); - if (ib_dev->disassociate_ucontext) - ib_dev->disassociate_ucontext(ucontext); + if (ib_dev->ops.disassociate_ucontext) + ib_dev->ops.disassociate_ucontext(ucontext); } ib_rdmacg_uncharge(&ucontext->cg_obj, ib_dev, @@ -823,7 +823,7 @@ static void ufile_destroy_ucontext(struct ib_uverbs_file *ufile, * FIXME: Drivers are not permitted to fail dealloc_ucontext, remove * the error return. 
*/ - ret = ib_dev->dealloc_ucontext(ucontext); + ret = ib_dev->ops.dealloc_ucontext(ucontext); WARN_ON(ret); ufile->ucontext = NULL; diff --git a/drivers/infiniband/core/security.c b/drivers/infiniband/core/security.c index 1143c0448666..1efadbccf394 100644 --- a/drivers/infiniband/core/security.c +++ b/drivers/infiniband/core/security.c @@ -626,10 +626,10 @@ int ib_security_modify_qp(struct ib_qp *qp, } if (!ret) - ret = real_qp->device->modify_qp(real_qp, - qp_attr, - qp_attr_mask, - udata); + ret = real_qp->device->ops.modify_qp(real_qp, + qp_attr, + qp_attr_mask, + udata); if (new_pps) { /* Clean up the lists and free the appropriate diff --git a/drivers/infiniband/core/smi.h b/drivers/infiniband/core/smi.h index 33c91c8a16e9..91d9b353ab85 100644 --- a/drivers/infiniband/core/smi.h +++ b/drivers/infiniband/core/smi.h @@ -67,7 +67,7 @@ static inline enum smi_action smi_check_local_smp(struct ib_smp *smp, { /* C14-9:3 -- We're at the end of the DR segment of path */ /* C14-9:4 -- Hop Pointer = Hop Count + 1 -> give to SMA/SM */ - return ((device->process_mad && + return ((device->ops.process_mad && !ib_get_smp_direction(smp) && (smp->hop_ptr == smp->hop_cnt + 1)) ? IB_SMI_HANDLE : IB_SMI_DISCARD); @@ -82,7 +82,7 @@ static inline enum smi_action smi_check_local_returning_smp(struct ib_smp *smp, { /* C14-13:3 -- We're at the end of the DR segment of path */ /* C14-13:4 -- Hop Pointer == 0 -> give to SM */ - return ((device->process_mad && + return ((device->ops.process_mad && ib_get_smp_direction(smp) && !smp->hop_ptr) ? 
IB_SMI_HANDLE : IB_SMI_DISCARD); } diff --git a/drivers/infiniband/core/sysfs.c b/drivers/infiniband/core/sysfs.c index 6fcce2c206c6..80f68eb0ba5c 100644 --- a/drivers/infiniband/core/sysfs.c +++ b/drivers/infiniband/core/sysfs.c @@ -462,7 +462,7 @@ static int get_perf_mad(struct ib_device *dev, int port_num, __be16 attr, u16 out_mad_pkey_index = 0; ssize_t ret; - if (!dev->process_mad) + if (!dev->ops.process_mad) return -ENOSYS; in_mad = kzalloc(sizeof *in_mad, GFP_KERNEL); @@ -481,11 +481,11 @@ static int get_perf_mad(struct ib_device *dev, int port_num, __be16 attr, if (attr != IB_PMA_CLASS_PORT_INFO) in_mad->data[41] = port_num; /* PortSelect field */ - if ((dev->process_mad(dev, IB_MAD_IGNORE_MKEY, - port_num, NULL, NULL, - (const struct ib_mad_hdr *)in_mad, mad_size, - (struct ib_mad_hdr *)out_mad, &mad_size, - &out_mad_pkey_index) & + if ((dev->ops.process_mad(dev, IB_MAD_IGNORE_MKEY, + port_num, NULL, NULL, + (const struct ib_mad_hdr *)in_mad, mad_size, + (struct ib_mad_hdr *)out_mad, &mad_size, + &out_mad_pkey_index) & (IB_MAD_RESULT_SUCCESS | IB_MAD_RESULT_REPLY)) != (IB_MAD_RESULT_SUCCESS | IB_MAD_RESULT_REPLY)) { ret = -EINVAL; @@ -786,7 +786,7 @@ static int update_hw_stats(struct ib_device *dev, struct rdma_hw_stats *stats, if (time_is_after_eq_jiffies(stats->timestamp + stats->lifespan)) return 0; - ret = dev->get_hw_stats(dev, stats, port_num, index); + ret = dev->ops.get_hw_stats(dev, stats, port_num, index); if (ret < 0) return ret; if (ret == stats->num_counters) @@ -946,7 +946,7 @@ static void setup_hw_stats(struct ib_device *device, struct ib_port *port, struct rdma_hw_stats *stats; int i, ret; - stats = device->alloc_hw_stats(device, port_num); + stats = device->ops.alloc_hw_stats(device, port_num); if (!stats) return; @@ -964,8 +964,8 @@ static void setup_hw_stats(struct ib_device *device, struct ib_port *port, if (!hsag) goto err_free_stats; - ret = device->get_hw_stats(device, stats, port_num, - stats->num_counters); + ret = 
device->ops.get_hw_stats(device, stats, port_num, + stats->num_counters); if (ret != stats->num_counters) goto err_free_hsag; @@ -1057,7 +1057,7 @@ static int add_port(struct ib_device *device, int port_num, goto err_put; } - if (device->process_mad) { + if (device->ops.process_mad) { p->pma_table = get_counter_table(device, port_num); ret = sysfs_create_group(&p->kobj, p->pma_table); if (ret) @@ -1124,7 +1124,7 @@ static int add_port(struct ib_device *device, int port_num, * port, so holder should be device. Therefore skip per port conunter * initialization. */ - if (device->alloc_hw_stats && port_num) + if (device->ops.alloc_hw_stats && port_num) setup_hw_stats(device, p, port_num); list_add_tail(&p->kobj.entry, &device->port_list); @@ -1245,7 +1245,7 @@ static ssize_t node_desc_store(struct device *device, struct ib_device_modify desc = {}; int ret; - if (!dev->modify_device) + if (!dev->ops.modify_device) return -EIO; memcpy(desc.node_desc, buf, min_t(int, count, IB_DEVICE_NODE_DESC_MAX)); @@ -1341,7 +1341,7 @@ int ib_device_register_sysfs(struct ib_device *device, } } - if (device->alloc_hw_stats) + if (device->ops.alloc_hw_stats) setup_hw_stats(device, NULL, 0); return 0; diff --git a/drivers/infiniband/core/ucm.c b/drivers/infiniband/core/ucm.c index faa9e6116b2f..f35580fd0d3a 100644 --- a/drivers/infiniband/core/ucm.c +++ b/drivers/infiniband/core/ucm.c @@ -1239,7 +1239,7 @@ static void ib_ucm_add_one(struct ib_device *device) dev_t base; struct ib_ucm_device *ucm_dev; - if (!device->alloc_ucontext || !rdma_cap_ib_cm(device, 1)) + if (!device->ops.alloc_ucontext || !rdma_cap_ib_cm(device, 1)) return; ucm_dev = kzalloc(sizeof *ucm_dev, GFP_KERNEL); diff --git a/drivers/infiniband/core/uverbs_cmd.c b/drivers/infiniband/core/uverbs_cmd.c index a93853770e3c..12906798ef0e 100644 --- a/drivers/infiniband/core/uverbs_cmd.c +++ b/drivers/infiniband/core/uverbs_cmd.c @@ -106,7 +106,7 @@ ssize_t ib_uverbs_get_context(struct ib_uverbs_file *file, if (ret) goto err; - 
ucontext = ib_dev->alloc_ucontext(ib_dev, &udata); + ucontext = ib_dev->ops.alloc_ucontext(ib_dev, &udata); if (IS_ERR(ucontext)) { ret = PTR_ERR(ucontext); goto err_alloc; @@ -166,7 +166,7 @@ ssize_t ib_uverbs_get_context(struct ib_uverbs_file *file, put_unused_fd(resp.async_fd); err_free: - ib_dev->dealloc_ucontext(ucontext); + ib_dev->ops.dealloc_ucontext(ucontext); err_alloc: ib_rdmacg_uncharge(&cg_obj, ib_dev, RDMACG_RESOURCE_HCA_HANDLE); @@ -364,7 +364,7 @@ ssize_t ib_uverbs_alloc_pd(struct ib_uverbs_file *file, if (IS_ERR(uobj)) return PTR_ERR(uobj); - pd = ib_dev->alloc_pd(ib_dev, uobj->context, &udata); + pd = ib_dev->ops.alloc_pd(ib_dev, uobj->context, &udata); if (IS_ERR(pd)) { ret = PTR_ERR(pd); goto err; @@ -552,7 +552,8 @@ ssize_t ib_uverbs_open_xrcd(struct ib_uverbs_file *file, } if (!xrcd) { - xrcd = ib_dev->alloc_xrcd(ib_dev, obj->uobject.context, &udata); + xrcd = ib_dev->ops.alloc_xrcd(ib_dev, obj->uobject.context, + &udata); if (IS_ERR(xrcd)) { ret = PTR_ERR(xrcd); goto err; @@ -703,8 +704,8 @@ ssize_t ib_uverbs_reg_mr(struct ib_uverbs_file *file, } } - mr = pd->device->reg_user_mr(pd, cmd.start, cmd.length, cmd.hca_va, - cmd.access_flags, &udata); + mr = pd->device->ops.reg_user_mr(pd, cmd.start, cmd.length, cmd.hca_va, + cmd.access_flags, &udata); if (IS_ERR(mr)) { ret = PTR_ERR(mr); goto err_put; @@ -804,9 +805,9 @@ ssize_t ib_uverbs_rereg_mr(struct ib_uverbs_file *file, } old_pd = mr->pd; - ret = mr->device->rereg_user_mr(mr, cmd.flags, cmd.start, - cmd.length, cmd.hca_va, - cmd.access_flags, pd, &udata); + ret = mr->device->ops.rereg_user_mr(mr, cmd.flags, cmd.start, + cmd.length, cmd.hca_va, + cmd.access_flags, pd, &udata); if (!ret) { if (cmd.flags & IB_MR_REREG_PD) { atomic_inc(&pd->usecnt); @@ -883,7 +884,7 @@ ssize_t ib_uverbs_alloc_mw(struct ib_uverbs_file *file, in_len - sizeof(cmd) - sizeof(struct ib_uverbs_cmd_hdr), out_len - sizeof(resp)); - mw = pd->device->alloc_mw(pd, cmd.mw_type, &udata); + mw = pd->device->ops.alloc_mw(pd, 
cmd.mw_type, &udata); if (IS_ERR(mw)) { ret = PTR_ERR(mw); goto err_put; @@ -992,7 +993,7 @@ static struct ib_ucq_object *create_cq(struct ib_uverbs_file *file, if (IS_ERR(obj)) return obj; - if (!ib_dev->create_cq) { + if (!ib_dev->ops.create_cq) { ret = -EOPNOTSUPP; goto err; } @@ -1017,7 +1018,7 @@ static struct ib_ucq_object *create_cq(struct ib_uverbs_file *file, if (cmd_sz > offsetof(typeof(*cmd), flags) + sizeof(cmd->flags)) attr.flags = cmd->flags; - cq = ib_dev->create_cq(ib_dev, &attr, obj->uobject.context, uhw); + cq = ib_dev->ops.create_cq(ib_dev, &attr, obj->uobject.context, uhw); if (IS_ERR(cq)) { ret = PTR_ERR(cq); goto err_file; @@ -1182,7 +1183,7 @@ ssize_t ib_uverbs_resize_cq(struct ib_uverbs_file *file, if (!cq) return -EINVAL; - ret = cq->device->resize_cq(cq, cmd.cqe, &udata); + ret = cq->device->ops.resize_cq(cq, cmd.cqe, &udata); if (ret) goto out; @@ -2350,7 +2351,7 @@ ssize_t ib_uverbs_post_send(struct ib_uverbs_file *file, } resp.bad_wr = 0; - ret = qp->device->post_send(qp->real_qp, wr, &bad_wr); + ret = qp->device->ops.post_send(qp->real_qp, wr, &bad_wr); if (ret) for (next = wr; next; next = next->next) { ++resp.bad_wr; @@ -2495,7 +2496,7 @@ ssize_t ib_uverbs_post_recv(struct ib_uverbs_file *file, goto out; resp.bad_wr = 0; - ret = qp->device->post_recv(qp->real_qp, wr, &bad_wr); + ret = qp->device->ops.post_recv(qp->real_qp, wr, &bad_wr); uobj_put_obj_read(qp); if (ret) { @@ -2544,8 +2545,8 @@ ssize_t ib_uverbs_post_srq_recv(struct ib_uverbs_file *file, goto out; resp.bad_wr = 0; - ret = srq->device->post_srq_recv ? - srq->device->post_srq_recv(srq, wr, &bad_wr) : -EOPNOTSUPP; + ret = srq->device->ops.post_srq_recv ? 
+ srq->device->ops.post_srq_recv(srq, wr, &bad_wr) : -EOPNOTSUPP; uobj_put_obj_read(srq); @@ -3147,11 +3148,11 @@ int ib_uverbs_ex_create_wq(struct ib_uverbs_file *file, obj->uevent.events_reported = 0; INIT_LIST_HEAD(&obj->uevent.event_list); - if (!pd->device->create_wq) { + if (!pd->device->ops.create_wq) { err = -EOPNOTSUPP; goto err_put_cq; } - wq = pd->device->create_wq(pd, &wq_init_attr, uhw); + wq = pd->device->ops.create_wq(pd, &wq_init_attr, uhw); if (IS_ERR(wq)) { err = PTR_ERR(wq); goto err_put_cq; @@ -3282,11 +3283,11 @@ int ib_uverbs_ex_modify_wq(struct ib_uverbs_file *file, wq_attr.flags = cmd.flags; wq_attr.flags_mask = cmd.flags_mask; } - if (!wq->device->modify_wq) { + if (!wq->device->ops.modify_wq) { ret = -EOPNOTSUPP; goto out; } - ret = wq->device->modify_wq(wq, &wq_attr, cmd.attr_mask, uhw); + ret = wq->device->ops.modify_wq(wq, &wq_attr, cmd.attr_mask, uhw); out: uobj_put_obj_read(wq); return ret; @@ -3385,11 +3386,12 @@ int ib_uverbs_ex_create_rwq_ind_table(struct ib_uverbs_file *file, init_attr.log_ind_tbl_size = cmd.log_ind_tbl_size; init_attr.ind_tbl = wqs; - if (!ib_dev->create_rwq_ind_table) { + if (!ib_dev->ops.create_rwq_ind_table) { err = -EOPNOTSUPP; goto err_uobj; } - rwq_ind_tbl = ib_dev->create_rwq_ind_table(ib_dev, &init_attr, uhw); + rwq_ind_tbl = ib_dev->ops.create_rwq_ind_table(ib_dev, + &init_attr, uhw); if (IS_ERR(rwq_ind_tbl)) { err = PTR_ERR(rwq_ind_tbl); @@ -3553,7 +3555,7 @@ int ib_uverbs_ex_create_flow(struct ib_uverbs_file *file, goto err_put; } - if (!qp->device->create_flow) { + if (!qp->device->ops.create_flow) { err = -EOPNOTSUPP; goto err_put; } @@ -3602,8 +3604,8 @@ int ib_uverbs_ex_create_flow(struct ib_uverbs_file *file, goto err_free; } - flow_id = qp->device->create_flow(qp, flow_attr, - IB_FLOW_DOMAIN_USER, uhw); + flow_id = qp->device->ops.create_flow(qp, flow_attr, + IB_FLOW_DOMAIN_USER, uhw); if (IS_ERR(flow_id)) { err = PTR_ERR(flow_id); @@ -3626,7 +3628,7 @@ int ib_uverbs_ex_create_flow(struct 
ib_uverbs_file *file, kfree(kern_flow_attr); return uobj_alloc_commit(uobj, 0); err_copy: - if (!qp->device->destroy_flow(flow_id)) + if (!qp->device->ops.destroy_flow(flow_id)) atomic_dec(&qp->usecnt); err_free: ib_uverbs_flow_resources_free(uflow_res); @@ -3727,7 +3729,7 @@ static int __uverbs_create_xsrq(struct ib_uverbs_file *file, obj->uevent.events_reported = 0; INIT_LIST_HEAD(&obj->uevent.event_list); - srq = pd->device->create_srq(pd, &attr, udata); + srq = pd->device->ops.create_srq(pd, &attr, udata); if (IS_ERR(srq)) { ret = PTR_ERR(srq); goto err_put; @@ -3885,7 +3887,7 @@ ssize_t ib_uverbs_modify_srq(struct ib_uverbs_file *file, attr.max_wr = cmd.max_wr; attr.srq_limit = cmd.srq_limit; - ret = srq->device->modify_srq(srq, &attr, cmd.attr_mask, &udata); + ret = srq->device->ops.modify_srq(srq, &attr, cmd.attr_mask, &udata); uobj_put_obj_read(srq); @@ -3975,7 +3977,7 @@ int ib_uverbs_ex_query_device(struct ib_uverbs_file *file, return PTR_ERR(ucontext); ib_dev = ucontext->device; - if (!ib_dev->query_device) + if (!ib_dev->ops.query_device) return -EOPNOTSUPP; if (ucore->inlen < sizeof(cmd)) @@ -3996,7 +3998,7 @@ int ib_uverbs_ex_query_device(struct ib_uverbs_file *file, if (ucore->outlen < resp.response_length) return -ENOSPC; - err = ib_dev->query_device(ib_dev, &attr, uhw); + err = ib_dev->ops.query_device(ib_dev, &attr, uhw); if (err) return err; diff --git a/drivers/infiniband/core/uverbs_main.c b/drivers/infiniband/core/uverbs_main.c index 6d373f5515b7..074dae4fb9ac 100644 --- a/drivers/infiniband/core/uverbs_main.c +++ b/drivers/infiniband/core/uverbs_main.c @@ -164,7 +164,7 @@ int uverbs_dealloc_mw(struct ib_mw *mw) struct ib_pd *pd = mw->pd; int ret; - ret = mw->device->dealloc_mw(mw); + ret = mw->device->ops.dealloc_mw(mw); if (!ret) atomic_dec(&pd->usecnt); return ret; @@ -255,7 +255,7 @@ void ib_uverbs_release_file(struct kref *ref) srcu_key = srcu_read_lock(&file->device->disassociate_srcu); ib_dev = srcu_dereference(file->device->ib_dev, 
&file->device->disassociate_srcu); - if (ib_dev && !ib_dev->disassociate_ucontext) + if (ib_dev && !ib_dev->ops.disassociate_ucontext) module_put(ib_dev->owner); srcu_read_unlock(&file->device->disassociate_srcu, srcu_key); @@ -807,7 +807,7 @@ static int ib_uverbs_mmap(struct file *filp, struct vm_area_struct *vma) goto out; } - ret = ucontext->device->mmap(ucontext, vma); + ret = ucontext->device->ops.mmap(ucontext, vma); out: srcu_read_unlock(&file->device->disassociate_srcu, srcu_key); return ret; @@ -1069,7 +1069,7 @@ static int ib_uverbs_open(struct inode *inode, struct file *filp) /* In case IB device supports disassociate ucontext, there is no hard * dependency between uverbs device and its low level device. */ - module_dependent = !(ib_dev->disassociate_ucontext); + module_dependent = !(ib_dev->ops.disassociate_ucontext); if (module_dependent) { if (!try_module_get(ib_dev->owner)) { @@ -1239,7 +1239,7 @@ static void ib_uverbs_add_one(struct ib_device *device) struct ib_uverbs_device *uverbs_dev; int ret; - if (!device->alloc_ucontext) + if (!device->ops.alloc_ucontext) return; uverbs_dev = kzalloc(sizeof(*uverbs_dev), GFP_KERNEL); @@ -1285,7 +1285,7 @@ static void ib_uverbs_add_one(struct ib_device *device) dev_set_name(&uverbs_dev->dev, "uverbs%d", uverbs_dev->devnum); cdev_init(&uverbs_dev->cdev, - device->mmap ? &uverbs_mmap_fops : &uverbs_fops); + device->ops.mmap ? &uverbs_mmap_fops : &uverbs_fops); uverbs_dev->cdev.owner = THIS_MODULE; ret = cdev_device_add(&uverbs_dev->cdev, &uverbs_dev->dev); @@ -1373,7 +1373,7 @@ static void ib_uverbs_remove_one(struct ib_device *device, void *client_data) cdev_device_del(&uverbs_dev->cdev, &uverbs_dev->dev); ida_free(&uverbs_ida, uverbs_dev->devnum); - if (device->disassociate_ucontext) { + if (device->ops.disassociate_ucontext) { /* We disassociate HW resources and immediately return. * Userspace will see a EIO errno for all future access. 
* Upon returning, ib_device may be freed internally and is not diff --git a/drivers/infiniband/core/uverbs_std_types.c b/drivers/infiniband/core/uverbs_std_types.c index 203cc96ac6f5..dd433387907e 100644 --- a/drivers/infiniband/core/uverbs_std_types.c +++ b/drivers/infiniband/core/uverbs_std_types.c @@ -54,7 +54,7 @@ static int uverbs_free_flow(struct ib_uobject *uobject, struct ib_qp *qp = flow->qp; int ret; - ret = flow->device->destroy_flow(flow); + ret = flow->device->ops.destroy_flow(flow); if (!ret) { if (qp) atomic_dec(&qp->usecnt); diff --git a/drivers/infiniband/core/uverbs_std_types_counters.c b/drivers/infiniband/core/uverbs_std_types_counters.c index a0ffdcf9a51c..11c15712fb8c 100644 --- a/drivers/infiniband/core/uverbs_std_types_counters.c +++ b/drivers/infiniband/core/uverbs_std_types_counters.c @@ -44,7 +44,7 @@ static int uverbs_free_counters(struct ib_uobject *uobject, if (ret) return ret; - return counters->device->destroy_counters(counters); + return counters->device->ops.destroy_counters(counters); } static int UVERBS_HANDLER(UVERBS_METHOD_COUNTERS_CREATE)( @@ -61,10 +61,10 @@ static int UVERBS_HANDLER(UVERBS_METHOD_COUNTERS_CREATE)( * have the ability to remove methods from parse tree once * such condition is met. 
*/ - if (!ib_dev->create_counters) + if (!ib_dev->ops.create_counters) return -EOPNOTSUPP; - counters = ib_dev->create_counters(ib_dev, attrs); + counters = ib_dev->ops.create_counters(ib_dev, attrs); if (IS_ERR(counters)) { ret = PTR_ERR(counters); goto err_create_counters; @@ -90,7 +90,7 @@ static int UVERBS_HANDLER(UVERBS_METHOD_COUNTERS_READ)( uverbs_attr_get_obj(attrs, UVERBS_ATTR_READ_COUNTERS_HANDLE); int ret; - if (!counters->device->read_counters) + if (!counters->device->ops.read_counters) return -EOPNOTSUPP; if (!atomic_read(&counters->usecnt)) @@ -109,7 +109,7 @@ static int UVERBS_HANDLER(UVERBS_METHOD_COUNTERS_READ)( if (IS_ERR(read_attr.counters_buff)) return PTR_ERR(read_attr.counters_buff); - ret = counters->device->read_counters(counters, &read_attr, attrs); + ret = counters->device->ops.read_counters(counters, &read_attr, attrs); if (ret) return ret; diff --git a/drivers/infiniband/core/uverbs_std_types_cq.c b/drivers/infiniband/core/uverbs_std_types_cq.c index 5b5f2052cd52..5e396bae9649 100644 --- a/drivers/infiniband/core/uverbs_std_types_cq.c +++ b/drivers/infiniband/core/uverbs_std_types_cq.c @@ -72,7 +72,7 @@ static int UVERBS_HANDLER(UVERBS_METHOD_CQ_CREATE)( struct ib_uverbs_completion_event_file *ev_file = NULL; struct ib_uobject *ev_file_uobj; - if (!ib_dev->create_cq || !ib_dev->destroy_cq) + if (!ib_dev->ops.create_cq || !ib_dev->ops.destroy_cq) return -EOPNOTSUPP; ret = uverbs_copy_from(&attr.comp_vector, attrs, @@ -114,7 +114,7 @@ static int UVERBS_HANDLER(UVERBS_METHOD_CQ_CREATE)( /* Temporary, only until drivers get the new uverbs_attr_bundle */ create_udata(attrs, &uhw); - cq = ib_dev->create_cq(ib_dev, &attr, obj->uobject.context, &uhw); + cq = ib_dev->ops.create_cq(ib_dev, &attr, obj->uobject.context, &uhw); if (IS_ERR(cq)) { ret = PTR_ERR(cq); goto err_event_file; diff --git a/drivers/infiniband/core/uverbs_std_types_dm.c b/drivers/infiniband/core/uverbs_std_types_dm.c index edc3ff7733d4..73747943cdab 100644 --- 
--- a/drivers/infiniband/core/uverbs_std_types_dm.c
+++ b/drivers/infiniband/core/uverbs_std_types_dm.c
@@ -43,7 +43,7 @@ static int uverbs_free_dm(struct ib_uobject *uobject,
 	if (ret)
 		return ret;
 
-	return dm->device->dealloc_dm(dm);
+	return dm->device->ops.dealloc_dm(dm);
 }
 
 static int
@@ -58,7 +58,7 @@ UVERBS_HANDLER(UVERBS_METHOD_DM_ALLOC)(struct ib_uverbs_file *file,
 	struct ib_dm *dm;
 	int ret;
 
-	if (!ib_dev->alloc_dm)
+	if (!ib_dev->ops.alloc_dm)
 		return -EOPNOTSUPP;
 
 	ret = uverbs_copy_from(&attr.length, attrs,
@@ -71,7 +71,7 @@ UVERBS_HANDLER(UVERBS_METHOD_DM_ALLOC)(struct ib_uverbs_file *file,
 	if (ret)
 		return ret;
 
-	dm = ib_dev->alloc_dm(ib_dev, uobj->context, &attr, attrs);
+	dm = ib_dev->ops.alloc_dm(ib_dev, uobj->context, &attr, attrs);
 	if (IS_ERR(dm))
 		return PTR_ERR(dm);
 
diff --git a/drivers/infiniband/core/uverbs_std_types_flow_action.c b/drivers/infiniband/core/uverbs_std_types_flow_action.c
index cb9486ad5c67..99d49153b621 100644
--- a/drivers/infiniband/core/uverbs_std_types_flow_action.c
+++ b/drivers/infiniband/core/uverbs_std_types_flow_action.c
@@ -43,7 +43,7 @@ static int uverbs_free_flow_action(struct ib_uobject *uobject,
 	if (ret)
 		return ret;
 
-	return action->device->destroy_flow_action(action);
+	return action->device->ops.destroy_flow_action(action);
 }
 
 static u64 esp_flags_uverbs_to_verbs(struct uverbs_attr_bundle *attrs,
@@ -314,7 +314,7 @@ static int UVERBS_HANDLER(UVERBS_METHOD_FLOW_ACTION_ESP_CREATE)(
 	struct ib_flow_action *action;
 	struct ib_flow_action_esp_attr esp_attr = {};
 
-	if (!ib_dev->create_flow_action_esp)
+	if (!ib_dev->ops.create_flow_action_esp)
 		return -EOPNOTSUPP;
 
 	ret = parse_flow_action_esp(ib_dev, file, attrs, &esp_attr, false);
@@ -322,7 +322,8 @@ static int UVERBS_HANDLER(UVERBS_METHOD_FLOW_ACTION_ESP_CREATE)(
 		return ret;
 
 	/* No need to check as this attribute is marked as MANDATORY */
-	action = ib_dev->create_flow_action_esp(ib_dev, &esp_attr.hdr, attrs);
+	action = ib_dev->ops.create_flow_action_esp(ib_dev, &esp_attr.hdr,
+						    attrs);
 	if (IS_ERR(action))
 		return PTR_ERR(action);
 
@@ -341,7 +342,7 @@ static int UVERBS_HANDLER(UVERBS_METHOD_FLOW_ACTION_ESP_MODIFY)(
 	int ret;
 	struct ib_flow_action_esp_attr esp_attr = {};
 
-	if (!action->device->modify_flow_action_esp)
+	if (!action->device->ops.modify_flow_action_esp)
 		return -EOPNOTSUPP;
 
 	ret = parse_flow_action_esp(action->device, file, attrs, &esp_attr,
@@ -352,8 +353,9 @@ static int UVERBS_HANDLER(UVERBS_METHOD_FLOW_ACTION_ESP_MODIFY)(
 	if (action->type != IB_FLOW_ACTION_ESP)
 		return -EINVAL;
 
-	return action->device->modify_flow_action_esp(action, &esp_attr.hdr,
-						      attrs);
+	return action->device->ops.modify_flow_action_esp(action,
+							  &esp_attr.hdr,
+							  attrs);
 }
 
 static const struct uverbs_attr_spec uverbs_flow_action_esp_keymat[] = {
diff --git a/drivers/infiniband/core/uverbs_std_types_mr.c b/drivers/infiniband/core/uverbs_std_types_mr.c
index cf02e774303e..8b06da9c9edb 100644
--- a/drivers/infiniband/core/uverbs_std_types_mr.c
+++ b/drivers/infiniband/core/uverbs_std_types_mr.c
@@ -54,7 +54,7 @@ static int UVERBS_HANDLER(UVERBS_METHOD_DM_MR_REG)(
 	struct ib_mr *mr;
 	int ret;
 
-	if (!ib_dev->reg_dm_mr)
+	if (!ib_dev->ops.reg_dm_mr)
 		return -EOPNOTSUPP;
 
 	ret = uverbs_copy_from(&attr.offset, attrs, UVERBS_ATTR_REG_DM_MR_OFFSET);
@@ -83,7 +83,7 @@ static int UVERBS_HANDLER(UVERBS_METHOD_DM_MR_REG)(
 	    attr.length > dm->length - attr.offset)
 		return -EINVAL;
 
-	mr = pd->device->reg_dm_mr(pd, dm, &attr, attrs);
+	mr = pd->device->ops.reg_dm_mr(pd, dm, &attr, attrs);
 	if (IS_ERR(mr))
 		return PTR_ERR(mr);
 
diff --git a/drivers/infiniband/core/verbs.c b/drivers/infiniband/core/verbs.c
index 65a7e0b44ad7..85d475a6d388 100644
--- a/drivers/infiniband/core/verbs.c
+++ b/drivers/infiniband/core/verbs.c
@@ -214,8 +214,8 @@ EXPORT_SYMBOL(rdma_node_get_transport);
 enum rdma_link_layer rdma_port_get_link_layer(struct ib_device *device, u8 port_num)
 {
 	enum rdma_transport_type lt;
-	if (device->get_link_layer)
-		return device->get_link_layer(device, port_num);
+	if (device->ops.get_link_layer)
+		return device->ops.get_link_layer(device, port_num);
 
 	lt = rdma_node_get_transport(device->node_type);
 	if (lt == RDMA_TRANSPORT_IB)
@@ -243,7 +243,7 @@ struct ib_pd *__ib_alloc_pd(struct ib_device *device, unsigned int flags,
 	struct ib_pd *pd;
 	int mr_access_flags = 0;
 
-	pd = device->alloc_pd(device, NULL, NULL);
+	pd = device->ops.alloc_pd(device, NULL, NULL);
 	if (IS_ERR(pd))
 		return pd;
 
@@ -270,7 +270,7 @@ struct ib_pd *__ib_alloc_pd(struct ib_device *device, unsigned int flags,
 	if (mr_access_flags) {
 		struct ib_mr *mr;
 
-		mr = pd->device->get_dma_mr(pd, mr_access_flags);
+		mr = pd->device->ops.get_dma_mr(pd, mr_access_flags);
 		if (IS_ERR(mr)) {
 			ib_dealloc_pd(pd);
 			return ERR_CAST(mr);
@@ -307,7 +307,7 @@ void ib_dealloc_pd(struct ib_pd *pd)
 	int ret;
 
 	if (pd->__internal_mr) {
-		ret = pd->device->dereg_mr(pd->__internal_mr);
+		ret = pd->device->ops.dereg_mr(pd->__internal_mr);
 		WARN_ON(ret);
 		pd->__internal_mr = NULL;
 	}
@@ -319,7 +319,7 @@ void ib_dealloc_pd(struct ib_pd *pd)
 	rdma_restrack_del(&pd->res);
 	/* Making delalloc_pd a void return is a WIP, no driver
 	 * should return an error here. */
-	ret = pd->device->dealloc_pd(pd);
+	ret = pd->device->ops.dealloc_pd(pd);
 	WARN_ONCE(ret, "Infiniband HW driver failed dealloc_pd");
 }
 EXPORT_SYMBOL(ib_dealloc_pd);
@@ -479,10 +479,10 @@ static struct ib_ah *_rdma_create_ah(struct ib_pd *pd,
 {
 	struct ib_ah *ah;
 
-	if (!pd->device->create_ah)
+	if (!pd->device->ops.create_ah)
 		return ERR_PTR(-EOPNOTSUPP);
 
-	ah = pd->device->create_ah(pd, ah_attr, udata);
+	ah = pd->device->ops.create_ah(pd, ah_attr, udata);
 
 	if (!IS_ERR(ah)) {
 		ah->device = pd->device;
@@ -888,8 +888,8 @@ int rdma_modify_ah(struct ib_ah *ah, struct rdma_ah_attr *ah_attr)
 	if (ret)
 		return ret;
 
-	ret = ah->device->modify_ah ?
-		ah->device->modify_ah(ah, ah_attr) :
+	ret = ah->device->ops.modify_ah ?
+		ah->device->ops.modify_ah(ah, ah_attr) :
 		-EOPNOTSUPP;
 
 	ah->sgid_attr = rdma_update_sgid_attr(ah_attr, ah->sgid_attr);
@@ -902,8 +902,8 @@ int rdma_query_ah(struct ib_ah *ah, struct rdma_ah_attr *ah_attr)
 {
 	ah_attr->grh.sgid_attr = NULL;
 
-	return ah->device->query_ah ?
-		ah->device->query_ah(ah, ah_attr) :
+	return ah->device->ops.query_ah ?
+		ah->device->ops.query_ah(ah, ah_attr) :
 		-EOPNOTSUPP;
 }
 EXPORT_SYMBOL(rdma_query_ah);
@@ -915,7 +915,7 @@ int rdma_destroy_ah(struct ib_ah *ah)
 	int ret;
 
 	pd = ah->pd;
-	ret = ah->device->destroy_ah(ah);
+	ret = ah->device->ops.destroy_ah(ah);
 	if (!ret) {
 		atomic_dec(&pd->usecnt);
 		if (sgid_attr)
@@ -933,10 +933,10 @@ struct ib_srq *ib_create_srq(struct ib_pd *pd,
 {
 	struct ib_srq *srq;
 
-	if (!pd->device->create_srq)
+	if (!pd->device->ops.create_srq)
 		return ERR_PTR(-EOPNOTSUPP);
 
-	srq = pd->device->create_srq(pd, srq_init_attr, NULL);
+	srq = pd->device->ops.create_srq(pd, srq_init_attr, NULL);
 
 	if (!IS_ERR(srq)) {
 		srq->device = pd->device;
@@ -965,17 +965,17 @@ int ib_modify_srq(struct ib_srq *srq,
 		  struct ib_srq_attr *srq_attr,
 		  enum ib_srq_attr_mask srq_attr_mask)
 {
-	return srq->device->modify_srq ?
-		srq->device->modify_srq(srq, srq_attr, srq_attr_mask, NULL) :
-		-EOPNOTSUPP;
+	return srq->device->ops.modify_srq ?
+		srq->device->ops.modify_srq(srq, srq_attr, srq_attr_mask,
+					    NULL) : -EOPNOTSUPP;
 }
 EXPORT_SYMBOL(ib_modify_srq);
 
 int ib_query_srq(struct ib_srq *srq,
 		 struct ib_srq_attr *srq_attr)
 {
-	return srq->device->query_srq ?
-		srq->device->query_srq(srq, srq_attr) : -EOPNOTSUPP;
+	return srq->device->ops.query_srq ?
+		srq->device->ops.query_srq(srq, srq_attr) : -EOPNOTSUPP;
 }
 EXPORT_SYMBOL(ib_query_srq);
 
@@ -997,7 +997,7 @@ int ib_destroy_srq(struct ib_srq *srq)
 	if (srq_type == IB_SRQT_XRC)
 		xrcd = srq->ext.xrc.xrcd;
 
-	ret = srq->device->destroy_srq(srq);
+	ret = srq->device->ops.destroy_srq(srq);
 	if (!ret) {
 		atomic_dec(&pd->usecnt);
 		if (srq_type == IB_SRQT_XRC)
@@ -1106,7 +1106,7 @@ static struct ib_qp *ib_create_xrc_qp(struct ib_qp *qp,
 	if (!IS_ERR(qp))
 		__ib_insert_xrcd_qp(qp_init_attr->xrcd, real_qp);
 	else
-		real_qp->device->destroy_qp(real_qp);
+		real_qp->device->ops.destroy_qp(real_qp);
 	return qp;
 }
 
@@ -1692,10 +1692,10 @@ int ib_get_eth_speed(struct ib_device *dev, u8 port_num, u8 *speed, u8 *width)
 	if (rdma_port_get_link_layer(dev, port_num) != IB_LINK_LAYER_ETHERNET)
 		return -EINVAL;
 
-	if (!dev->get_netdev)
+	if (!dev->ops.get_netdev)
 		return -EOPNOTSUPP;
 
-	netdev = dev->get_netdev(dev, port_num);
+	netdev = dev->ops.get_netdev(dev, port_num);
 	if (!netdev)
 		return -ENODEV;
 
@@ -1753,9 +1753,9 @@ int ib_query_qp(struct ib_qp *qp,
 	qp_attr->ah_attr.grh.sgid_attr = NULL;
 	qp_attr->alt_ah_attr.grh.sgid_attr = NULL;
 
-	return qp->device->query_qp ?
-		qp->device->query_qp(qp->real_qp, qp_attr, qp_attr_mask, qp_init_attr) :
-		-EOPNOTSUPP;
+	return qp->device->ops.query_qp ?
+		qp->device->ops.query_qp(qp->real_qp, qp_attr, qp_attr_mask,
+					 qp_init_attr) : -EOPNOTSUPP;
 }
 EXPORT_SYMBOL(ib_query_qp);
 
@@ -1841,7 +1841,7 @@ int ib_destroy_qp(struct ib_qp *qp)
 		rdma_rw_cleanup_mrs(qp);
 
 	rdma_restrack_del(&qp->res);
-	ret = qp->device->destroy_qp(qp);
+	ret = qp->device->ops.destroy_qp(qp);
 	if (!ret) {
 		if (alt_path_sgid_attr)
 			rdma_put_gid_attr(alt_path_sgid_attr);
@@ -1879,7 +1879,7 @@ struct ib_cq *__ib_create_cq(struct ib_device *device,
 {
 	struct ib_cq *cq;
 
-	cq = device->create_cq(device, cq_attr, NULL, NULL);
+	cq = device->ops.create_cq(device, cq_attr, NULL, NULL);
 
 	if (!IS_ERR(cq)) {
 		cq->device = device;
@@ -1899,8 +1899,9 @@ EXPORT_SYMBOL(__ib_create_cq);
 
 int rdma_set_cq_moderation(struct ib_cq *cq, u16 cq_count, u16 cq_period)
 {
-	return cq->device->modify_cq ?
-		cq->device->modify_cq(cq, cq_count, cq_period) : -EOPNOTSUPP;
+	return cq->device->ops.modify_cq ?
+		cq->device->ops.modify_cq(cq, cq_count,
+					  cq_period) : -EOPNOTSUPP;
 }
 EXPORT_SYMBOL(rdma_set_cq_moderation);
 
@@ -1910,14 +1911,14 @@ int ib_destroy_cq(struct ib_cq *cq)
 		return -EBUSY;
 
 	rdma_restrack_del(&cq->res);
-	return cq->device->destroy_cq(cq);
+	return cq->device->ops.destroy_cq(cq);
 }
 EXPORT_SYMBOL(ib_destroy_cq);
 
 int ib_resize_cq(struct ib_cq *cq, int cqe)
 {
-	return cq->device->resize_cq ?
-		cq->device->resize_cq(cq, cqe, NULL) : -EOPNOTSUPP;
+	return cq->device->ops.resize_cq ?
+		cq->device->ops.resize_cq(cq, cqe, NULL) : -EOPNOTSUPP;
 }
 EXPORT_SYMBOL(ib_resize_cq);
 
@@ -1930,7 +1931,7 @@ int ib_dereg_mr(struct ib_mr *mr)
 	int ret;
 
 	rdma_restrack_del(&mr->res);
-	ret = mr->device->dereg_mr(mr);
+	ret = mr->device->ops.dereg_mr(mr);
 	if (!ret) {
 		atomic_dec(&pd->usecnt);
 		if (dm)
@@ -1959,10 +1960,10 @@ struct ib_mr *ib_alloc_mr(struct ib_pd *pd,
 {
 	struct ib_mr *mr;
 
-	if (!pd->device->alloc_mr)
+	if (!pd->device->ops.alloc_mr)
 		return ERR_PTR(-EOPNOTSUPP);
 
-	mr = pd->device->alloc_mr(pd, mr_type, max_num_sg);
+	mr = pd->device->ops.alloc_mr(pd, mr_type, max_num_sg);
 	if (!IS_ERR(mr)) {
 		mr->device = pd->device;
 		mr->pd = pd;
@@ -1986,10 +1987,10 @@ struct ib_fmr *ib_alloc_fmr(struct ib_pd *pd,
 {
 	struct ib_fmr *fmr;
 
-	if (!pd->device->alloc_fmr)
+	if (!pd->device->ops.alloc_fmr)
 		return ERR_PTR(-EOPNOTSUPP);
 
-	fmr = pd->device->alloc_fmr(pd, mr_access_flags, fmr_attr);
+	fmr = pd->device->ops.alloc_fmr(pd, mr_access_flags, fmr_attr);
 	if (!IS_ERR(fmr)) {
 		fmr->device = pd->device;
 		fmr->pd = pd;
@@ -2008,7 +2009,7 @@ int ib_unmap_fmr(struct list_head *fmr_list)
 		return 0;
 
 	fmr = list_entry(fmr_list->next, struct ib_fmr, list);
-	return fmr->device->unmap_fmr(fmr_list);
+	return fmr->device->ops.unmap_fmr(fmr_list);
 }
 EXPORT_SYMBOL(ib_unmap_fmr);
 
@@ -2018,7 +2019,7 @@ int ib_dealloc_fmr(struct ib_fmr *fmr)
 	int ret;
 
 	pd = fmr->pd;
-	ret = fmr->device->dealloc_fmr(fmr);
+	ret = fmr->device->ops.dealloc_fmr(fmr);
 	if (!ret)
 		atomic_dec(&pd->usecnt);
 
@@ -2070,14 +2071,14 @@ int ib_attach_mcast(struct ib_qp *qp, union ib_gid *gid, u16 lid)
 {
 	int ret;
 
-	if (!qp->device->attach_mcast)
+	if (!qp->device->ops.attach_mcast)
 		return -EOPNOTSUPP;
 
 	if (!rdma_is_multicast_addr((struct in6_addr *)gid->raw) ||
 	    qp->qp_type != IB_QPT_UD || !is_valid_mcast_lid(qp, lid))
 		return -EINVAL;
 
-	ret = qp->device->attach_mcast(qp, gid, lid);
+	ret = qp->device->ops.attach_mcast(qp, gid, lid);
 	if (!ret)
 		atomic_inc(&qp->usecnt);
 	return ret;
@@ -2088,14 +2089,14 @@ int ib_detach_mcast(struct ib_qp *qp, union ib_gid *gid, u16 lid)
 {
 	int ret;
 
-	if (!qp->device->detach_mcast)
+	if (!qp->device->ops.detach_mcast)
 		return -EOPNOTSUPP;
 
 	if (!rdma_is_multicast_addr((struct in6_addr *)gid->raw) ||
 	    qp->qp_type != IB_QPT_UD || !is_valid_mcast_lid(qp, lid))
 		return -EINVAL;
 
-	ret = qp->device->detach_mcast(qp, gid, lid);
+	ret = qp->device->ops.detach_mcast(qp, gid, lid);
 	if (!ret)
 		atomic_dec(&qp->usecnt);
 	return ret;
@@ -2106,10 +2107,10 @@ struct ib_xrcd *__ib_alloc_xrcd(struct ib_device *device, const char *caller)
 {
 	struct ib_xrcd *xrcd;
 
-	if (!device->alloc_xrcd)
+	if (!device->ops.alloc_xrcd)
 		return ERR_PTR(-EOPNOTSUPP);
 
-	xrcd = device->alloc_xrcd(device, NULL, NULL);
+	xrcd = device->ops.alloc_xrcd(device, NULL, NULL);
 	if (!IS_ERR(xrcd)) {
 		xrcd->device = device;
 		xrcd->inode = NULL;
@@ -2137,7 +2138,7 @@ int ib_dealloc_xrcd(struct ib_xrcd *xrcd)
 			return ret;
 	}
 
-	return xrcd->device->dealloc_xrcd(xrcd);
+	return xrcd->device->ops.dealloc_xrcd(xrcd);
 }
 EXPORT_SYMBOL(ib_dealloc_xrcd);
 
@@ -2160,10 +2161,10 @@ struct ib_wq *ib_create_wq(struct ib_pd *pd,
 {
 	struct ib_wq *wq;
 
-	if (!pd->device->create_wq)
+	if (!pd->device->ops.create_wq)
 		return ERR_PTR(-EOPNOTSUPP);
 
-	wq = pd->device->create_wq(pd, wq_attr, NULL);
+	wq = pd->device->ops.create_wq(pd, wq_attr, NULL);
 	if (!IS_ERR(wq)) {
 		wq->event_handler = wq_attr->event_handler;
 		wq->wq_context = wq_attr->wq_context;
@@ -2193,7 +2194,7 @@ int ib_destroy_wq(struct ib_wq *wq)
 	if (atomic_read(&wq->usecnt))
 		return -EBUSY;
 
-	err = wq->device->destroy_wq(wq);
+	err = wq->device->ops.destroy_wq(wq);
 	if (!err) {
 		atomic_dec(&pd->usecnt);
 		atomic_dec(&cq->usecnt);
@@ -2215,10 +2216,10 @@ int ib_modify_wq(struct ib_wq *wq, struct ib_wq_attr *wq_attr,
 {
 	int err;
 
-	if (!wq->device->modify_wq)
+	if (!wq->device->ops.modify_wq)
 		return -EOPNOTSUPP;
 
-	err = wq->device->modify_wq(wq, wq_attr, wq_attr_mask, NULL);
+	err = wq->device->ops.modify_wq(wq, wq_attr, wq_attr_mask, NULL);
 	return err;
 }
 EXPORT_SYMBOL(ib_modify_wq);
 
@@ -2240,12 +2241,12 @@ struct ib_rwq_ind_table *ib_create_rwq_ind_table(struct ib_device *device,
 	int i;
 	u32 table_size;
 
-	if (!device->create_rwq_ind_table)
+	if (!device->ops.create_rwq_ind_table)
 		return ERR_PTR(-EOPNOTSUPP);
 
 	table_size = (1 << init_attr->log_ind_tbl_size);
-	rwq_ind_table = device->create_rwq_ind_table(device,
-						     init_attr, NULL);
+	rwq_ind_table = device->ops.create_rwq_ind_table(device,
+							 init_attr, NULL);
 	if (IS_ERR(rwq_ind_table))
 		return rwq_ind_table;
 
@@ -2275,7 +2276,7 @@ int ib_destroy_rwq_ind_table(struct ib_rwq_ind_table *rwq_ind_table)
 	if (atomic_read(&rwq_ind_table->usecnt))
 		return -EBUSY;
 
-	err = rwq_ind_table->device->destroy_rwq_ind_table(rwq_ind_table);
+	err = rwq_ind_table->device->ops.destroy_rwq_ind_table(rwq_ind_table);
 	if (!err) {
 		for (i = 0; i < table_size; i++)
 			atomic_dec(&ind_tbl[i]->usecnt);
@@ -2288,48 +2289,50 @@ EXPORT_SYMBOL(ib_destroy_rwq_ind_table);
 int ib_check_mr_status(struct ib_mr *mr, u32 check_mask,
 		       struct ib_mr_status *mr_status)
 {
-	return mr->device->check_mr_status ?
-		mr->device->check_mr_status(mr, check_mask, mr_status) : -EOPNOTSUPP;
+	if (!mr->device->ops.check_mr_status)
+		return -EOPNOTSUPP;
+
+	return mr->device->ops.check_mr_status(mr, check_mask, mr_status);
 }
 EXPORT_SYMBOL(ib_check_mr_status);
 
 int ib_set_vf_link_state(struct ib_device *device, int vf, u8 port,
 			 int state)
 {
-	if (!device->set_vf_link_state)
+	if (!device->ops.set_vf_link_state)
 		return -EOPNOTSUPP;
 
-	return device->set_vf_link_state(device, vf, port, state);
+	return device->ops.set_vf_link_state(device, vf, port, state);
 }
 EXPORT_SYMBOL(ib_set_vf_link_state);
 
 int ib_get_vf_config(struct ib_device *device, int vf, u8 port,
		     struct ifla_vf_info *info)
 {
-	if (!device->get_vf_config)
+	if (!device->ops.get_vf_config)
 		return -EOPNOTSUPP;
 
-	return device->get_vf_config(device, vf, port, info);
+	return device->ops.get_vf_config(device, vf, port, info);
 }
 EXPORT_SYMBOL(ib_get_vf_config);
 
 int ib_get_vf_stats(struct ib_device *device, int vf, u8 port,
		    struct ifla_vf_stats *stats)
 {
-	if (!device->get_vf_stats)
+	if (!device->ops.get_vf_stats)
 		return -EOPNOTSUPP;
 
-	return device->get_vf_stats(device, vf, port, stats);
+	return device->ops.get_vf_stats(device, vf, port, stats);
 }
 EXPORT_SYMBOL(ib_get_vf_stats);
 
 int ib_set_vf_guid(struct ib_device *device, int vf, u8 port, u64 guid,
		   int type)
 {
-	if (!device->set_vf_guid)
+	if (!device->ops.set_vf_guid)
 		return -EOPNOTSUPP;
 
-	return device->set_vf_guid(device, vf, port, guid, type);
+	return device->ops.set_vf_guid(device, vf, port, guid, type);
 }
 EXPORT_SYMBOL(ib_set_vf_guid);
 
@@ -2361,12 +2364,12 @@ EXPORT_SYMBOL(ib_set_vf_guid);
 int ib_map_mr_sg(struct ib_mr *mr, struct scatterlist *sg, int sg_nents,
		 unsigned int *sg_offset, unsigned int page_size)
 {
-	if (unlikely(!mr->device->map_mr_sg))
+	if (unlikely(!mr->device->ops.map_mr_sg))
 		return -EOPNOTSUPP;
 
 	mr->page_size = page_size;
 
-	return mr->device->map_mr_sg(mr, sg, sg_nents, sg_offset);
+	return mr->device->ops.map_mr_sg(mr, sg, sg_nents, sg_offset);
 }
 EXPORT_SYMBOL(ib_map_mr_sg);
 
@@ -2565,8 +2568,8 @@ static void __ib_drain_rq(struct ib_qp *qp)
  */
 void ib_drain_sq(struct ib_qp *qp)
 {
-	if (qp->device->drain_sq)
-		qp->device->drain_sq(qp);
+	if (qp->device->ops.drain_sq)
+		qp->device->ops.drain_sq(qp);
 	else
 		__ib_drain_sq(qp);
 }
@@ -2593,8 +2596,8 @@ EXPORT_SYMBOL(ib_drain_sq);
  */
 void ib_drain_rq(struct ib_qp *qp)
 {
-	if (qp->device->drain_rq)
-		qp->device->drain_rq(qp);
+	if (qp->device->ops.drain_rq)
+		qp->device->ops.drain_rq(qp);
 	else
 		__ib_drain_rq(qp);
 }
diff --git a/drivers/infiniband/hw/i40iw/i40iw_cm.c b/drivers/infiniband/hw/i40iw/i40iw_cm.c
index 771eb6bd0785..ef137c40205c 100644
--- a/drivers/infiniband/hw/i40iw/i40iw_cm.c
+++ b/drivers/infiniband/hw/i40iw/i40iw_cm.c
@@ -3478,7 +3478,7 @@ static void i40iw_qp_disconnect(struct i40iw_qp *iwqp)
 	/* Need to free the Last Streaming Mode Message */
 	if (iwqp->ietf_mem.va) {
 		if (iwqp->lsmm_mr)
-			iwibdev->ibdev.dereg_mr(iwqp->lsmm_mr);
+			iwibdev->ibdev.ops.dereg_mr(iwqp->lsmm_mr);
 		i40iw_free_dma_mem(iwdev->sc_dev.hw, &iwqp->ietf_mem);
 	}
 }
diff --git a/drivers/infiniband/hw/mlx4/alias_GUID.c b/drivers/infiniband/hw/mlx4/alias_GUID.c
index 155b4dfc0ae8..e44d817d7d87 100644
--- a/drivers/infiniband/hw/mlx4/alias_GUID.c
+++ b/drivers/infiniband/hw/mlx4/alias_GUID.c
@@ -849,7 +849,7 @@ int mlx4_ib_init_alias_guid_service(struct mlx4_ib_dev *dev)
 	spin_lock_init(&dev->sriov.alias_guid.ag_work_lock);
 
 	for (i = 1; i <= dev->num_ports; ++i) {
-		if (dev->ib_dev.query_gid(&dev->ib_dev , i, 0, &gid)) {
+		if (dev->ib_dev.ops.query_gid(&dev->ib_dev , i, 0, &gid)) {
 			ret = -EFAULT;
 			goto err_unregister;
 		}
diff --git a/drivers/infiniband/hw/mlx5/main.c b/drivers/infiniband/hw/mlx5/main.c
index c8b4b514638a..9c9e40244387 100644
--- a/drivers/infiniband/hw/mlx5/main.c
+++ b/drivers/infiniband/hw/mlx5/main.c
@@ -146,7 +146,7 @@ static int get_port_state(struct ib_device *ibdev,
 	int ret;
 
 	memset(&attr, 0, sizeof(attr));
-	ret = ibdev->query_port(ibdev, port_num, &attr);
+	ret = ibdev->ops.query_port(ibdev, port_num, &attr);
 	if (!ret)
 		*state = attr.state;
 	return ret;
diff --git a/drivers/infiniband/hw/nes/nes_cm.c b/drivers/infiniband/hw/nes/nes_cm.c
index 2b67ace5b614..032883180f65 100644
--- a/drivers/infiniband/hw/nes/nes_cm.c
+++ b/drivers/infiniband/hw/nes/nes_cm.c
@@ -3033,7 +3033,7 @@ static int nes_disconnect(struct nes_qp *nesqp, int abrupt)
 	/* Need to free the Last Streaming Mode Message */
 	if (nesqp->ietf_frame) {
 		if (nesqp->lsmm_mr)
-			nesibdev->ibdev.dereg_mr(nesqp->lsmm_mr);
+			nesibdev->ibdev.ops.dereg_mr(nesqp->lsmm_mr);
 		pci_free_consistent(nesdev->pcidev, nesqp->private_data_len +
 				    nesqp->ietf_frame_size, nesqp->ietf_frame,
 				    nesqp->ietf_frame_pbase);
diff --git a/drivers/infiniband/sw/rdmavt/vt.c b/drivers/infiniband/sw/rdmavt/vt.c
index cf7aab970cb4..158b6ab77a17 100644
--- a/drivers/infiniband/sw/rdmavt/vt.c
+++ b/drivers/infiniband/sw/rdmavt/vt.c
@@ -469,31 +469,31 @@ static noinline int check_support(struct rvt_dev_info *rdi, int verb)
		 * rdmavt does not support modify device currently drivers must
		 * provide.
		 */
-		if (!rdi->ibdev.modify_device)
+		if (!rdi->ibdev.ops.modify_device)
			return -EOPNOTSUPP;
		break;
 
	case QUERY_PORT:
-		if (!rdi->ibdev.query_port)
+		if (!rdi->ibdev.ops.query_port)
			if (!rdi->driver_f.query_port_state)
				return -EINVAL;
		break;
 
	case MODIFY_PORT:
-		if (!rdi->ibdev.modify_port)
+		if (!rdi->ibdev.ops.modify_port)
			if (!rdi->driver_f.cap_mask_chg ||
			    !rdi->driver_f.shut_down_port)
				return -EINVAL;
		break;
 
	case QUERY_GID:
-		if (!rdi->ibdev.query_gid)
+		if (!rdi->ibdev.ops.query_gid)
			if (!rdi->driver_f.get_guid_be)
				return -EINVAL;
		break;
 
	case CREATE_QP:
-		if (!rdi->ibdev.create_qp)
+		if (!rdi->ibdev.ops.create_qp)
			if (!rdi->driver_f.qp_priv_alloc ||
			    !rdi->driver_f.qp_priv_free ||
			    !rdi->driver_f.notify_qp_reset ||
@@ -504,7 +504,7 @@ static noinline int check_support(struct rvt_dev_info *rdi, int verb)
		break;
 
	case MODIFY_QP:
-		if (!rdi->ibdev.modify_qp)
+		if (!rdi->ibdev.ops.modify_qp)
			if (!rdi->driver_f.notify_qp_reset ||
			    !rdi->driver_f.schedule_send ||
			    !rdi->driver_f.get_pmtu_from_attr ||
@@ -518,7 +518,7 @@ static noinline int check_support(struct rvt_dev_info *rdi, int verb)
		break;
 
	case DESTROY_QP:
-		if (!rdi->ibdev.destroy_qp)
+		if (!rdi->ibdev.ops.destroy_qp)
			if (!rdi->driver_f.qp_priv_free ||
			    !rdi->driver_f.notify_qp_reset ||
			    !rdi->driver_f.flush_qp_waiters ||
@@ -528,7 +528,7 @@ static noinline int check_support(struct rvt_dev_info *rdi, int verb)
		break;
 
	case POST_SEND:
-		if (!rdi->ibdev.post_send)
+		if (!rdi->ibdev.ops.post_send)
			if (!rdi->driver_f.schedule_send ||
			    !rdi->driver_f.do_send ||
			    !rdi->post_parms)
diff --git a/drivers/infiniband/ulp/ipoib/ipoib_main.c b/drivers/infiniband/ulp/ipoib/ipoib_main.c
index 1df90f0d9e64..c4c07ab42c62 100644
--- a/drivers/infiniband/ulp/ipoib/ipoib_main.c
+++ b/drivers/infiniband/ulp/ipoib/ipoib_main.c
@@ -2149,16 +2149,16 @@ static struct net_device *ipoib_get_netdev(struct ib_device *hca, u8 port,
 {
 	struct net_device *dev;
 
-	if (hca->alloc_rdma_netdev) {
-		dev = hca->alloc_rdma_netdev(hca, port,
-					     RDMA_NETDEV_IPOIB, name,
-					     NET_NAME_UNKNOWN,
-					     ipoib_setup_common);
+	if (hca->ops.alloc_rdma_netdev) {
+		dev = hca->ops.alloc_rdma_netdev(hca, port,
+						 RDMA_NETDEV_IPOIB, name,
+						 NET_NAME_UNKNOWN,
+						 ipoib_setup_common);
 		if (IS_ERR_OR_NULL(dev) && PTR_ERR(dev) != -EOPNOTSUPP)
 			return NULL;
 	}
 
-	if (!hca->alloc_rdma_netdev || PTR_ERR(dev) == -EOPNOTSUPP)
+	if (!hca->ops.alloc_rdma_netdev || PTR_ERR(dev) == -EOPNOTSUPP)
 		dev = ipoib_create_netdev_default(hca, name, NET_NAME_UNKNOWN,
 						  ipoib_setup_common);
 
diff --git a/drivers/infiniband/ulp/iser/iser_memory.c b/drivers/infiniband/ulp/iser/iser_memory.c
index 009be8889d71..86669bb06572 100644
--- a/drivers/infiniband/ulp/iser/iser_memory.c
+++ b/drivers/infiniband/ulp/iser/iser_memory.c
@@ -77,8 +77,8 @@ int iser_assign_reg_ops(struct iser_device *device)
 	struct ib_device *ib_dev = device->ib_device;
 
 	/* Assign function handles - based on FMR support */
-	if (ib_dev->alloc_fmr && ib_dev->dealloc_fmr &&
-	    ib_dev->map_phys_fmr && ib_dev->unmap_fmr) {
+	if (ib_dev->ops.alloc_fmr && ib_dev->ops.dealloc_fmr &&
+	    ib_dev->ops.map_phys_fmr && ib_dev->ops.unmap_fmr) {
 		iser_info("FMR supported, using FMR for registration\n");
 		device->reg_ops = &fmr_ops;
 	} else if (ib_dev->attrs.device_cap_flags & IB_DEVICE_MEM_MGT_EXTENSIONS) {
diff --git a/drivers/infiniband/ulp/opa_vnic/opa_vnic_netdev.c b/drivers/infiniband/ulp/opa_vnic/opa_vnic_netdev.c
index 61558788b3fa..ae70cd18903e 100644
--- a/drivers/infiniband/ulp/opa_vnic/opa_vnic_netdev.c
+++ b/drivers/infiniband/ulp/opa_vnic/opa_vnic_netdev.c
@@ -330,10 +330,10 @@ struct opa_vnic_adapter *opa_vnic_add_netdev(struct ib_device *ibdev,
 	struct rdma_netdev *rn;
 	int rc;
 
-	netdev = ibdev->alloc_rdma_netdev(ibdev, port_num,
-					  RDMA_NETDEV_OPA_VNIC,
-					  "veth%d", NET_NAME_UNKNOWN,
-					  ether_setup);
+	netdev = ibdev->ops.alloc_rdma_netdev(ibdev, port_num,
+					      RDMA_NETDEV_OPA_VNIC,
+					      "veth%d", NET_NAME_UNKNOWN,
+					      ether_setup);
 	if (!netdev)
 		return ERR_PTR(-ENOMEM);
 	else if (IS_ERR(netdev))
diff --git a/drivers/infiniband/ulp/srp/ib_srp.c b/drivers/infiniband/ulp/srp/ib_srp.c
index eed0eb3bb04c..e58146d020bc 100644
--- a/drivers/infiniband/ulp/srp/ib_srp.c
+++ b/drivers/infiniband/ulp/srp/ib_srp.c
@@ -4063,8 +4063,10 @@ static void srp_add_one(struct ib_device *device)
 	srp_dev->max_pages_per_mr = min_t(u64, SRP_MAX_PAGES_PER_MR,
 					  max_pages_per_mr);
 
-	srp_dev->has_fmr = (device->alloc_fmr && device->dealloc_fmr &&
-			    device->map_phys_fmr && device->unmap_fmr);
+	srp_dev->has_fmr = (device->ops.alloc_fmr &&
+			    device->ops.dealloc_fmr &&
+			    device->ops.map_phys_fmr &&
+			    device->ops.unmap_fmr);
 	srp_dev->has_fr = (attr->device_cap_flags &
 			   IB_DEVICE_MEM_MGT_EXTENSIONS);
 	if (!never_register && !srp_dev->has_fmr && !srp_dev->has_fr) {
diff --git a/fs/cifs/smbdirect.c b/fs/cifs/smbdirect.c
index 5fdb9a509a97..fce96f78e364 100644
--- a/fs/cifs/smbdirect.c
+++ b/fs/cifs/smbdirect.c
@@ -1724,7 +1724,7 @@ static struct smbd_connection *_smbd_get_connection(
 		info->responder_resources);
 
 	/* Need to send IRD/ORD in private data for iWARP */
-	info->id->device->get_port_immutable(
+	info->id->device->ops.get_port_immutable(
 		info->id->device, info->id->port_num, &port_immutable);
 	if (port_immutable.core_cap_flags & RDMA_CORE_PORT_IWARP) {
 		ird_ord_hdr[0] = info->responder_resources;
diff --git a/include/rdma/ib_verbs.h b/include/rdma/ib_verbs.h
index 29f0bd55a592..b77e0cc04f02 100644
--- a/include/rdma/ib_verbs.h
+++ b/include/rdma/ib_verbs.h
@@ -2506,7 +2506,7 @@ struct ib_device_ops {
 struct ib_device {
 	/* Do not access @dma_device directly from ULP nor from HW drivers. */
 	struct device *dma_device;
-
+	struct ib_device_ops ops;
 	char name[IB_DEVICE_NAME_MAX];
 
 	struct list_head event_handler_list;
@@ -2531,269 +2531,6 @@ struct ib_device {
 
 	struct iw_cm_verbs *iwcm;
 
-	/**
-	 * alloc_hw_stats - Allocate a struct rdma_hw_stats and fill in the
-	 * driver initialized data.  The struct is kfree()'ed by the sysfs
-	 * core when the device is removed.  A lifespan of -1 in the return
-	 * struct tells the core to set a default lifespan.
-	 */
-	struct rdma_hw_stats *(*alloc_hw_stats)(struct ib_device *device,
-						u8 port_num);
-	/**
-	 * get_hw_stats - Fill in the counter value(s) in the stats struct.
-	 * @index - The index in the value array we wish to have updated, or
-	 *   num_counters if we want all stats updated
-	 * Return codes -
-	 *   < 0 - Error, no counters updated
-	 *   index - Updated the single counter pointed to by index
-	 *   num_counters - Updated all counters (will reset the timestamp
-	 *     and prevent further calls for lifespan milliseconds)
-	 * Drivers are allowed to update all counters in leiu of just the
-	 *   one given in index at their option
-	 */
-	int (*get_hw_stats)(struct ib_device *device,
-			    struct rdma_hw_stats *stats,
-			    u8 port, int index);
-	int (*query_device)(struct ib_device *device,
-			    struct ib_device_attr *device_attr,
-			    struct ib_udata *udata);
-	int (*query_port)(struct ib_device *device,
-			  u8 port_num,
-			  struct ib_port_attr *port_attr);
-	enum rdma_link_layer (*get_link_layer)(struct ib_device *device,
-					       u8 port_num);
-	/* When calling get_netdev, the HW vendor's driver should return the
-	 * net device of device @device at port @port_num or NULL if such
-	 * a net device doesn't exist. The vendor driver should call dev_hold
-	 * on this net device. The HW vendor's device driver must guarantee
-	 * that this function returns NULL before the net device has finished
-	 * NETDEV_UNREGISTER state.
-	 */
-	struct net_device *(*get_netdev)(struct ib_device *device,
-					 u8 port_num);
-	/* query_gid should be return GID value for @device, when @port_num
-	 * link layer is either IB or iWarp. It is no-op if @port_num port
-	 * is RoCE link layer.
-	 */
-	int (*query_gid)(struct ib_device *device,
-			 u8 port_num, int index,
-			 union ib_gid *gid);
-	/* When calling add_gid, the HW vendor's driver should add the gid
-	 * of device of port at gid index available at @attr. Meta-info of
-	 * that gid (for example, the network device related to this gid) is
-	 * available at @attr. @context allows the HW vendor driver to store
-	 * extra information together with a GID entry. The HW vendor driver may
-	 * allocate memory to contain this information and store it in @context
-	 * when a new GID entry is written to. Params are consistent until the
-	 * next call of add_gid or delete_gid. The function should return 0 on
-	 * success or error otherwise. The function could be called
-	 * concurrently for different ports. This function is only called when
-	 * roce_gid_table is used.
-	 */
-	int (*add_gid)(const struct ib_gid_attr *attr,
-		       void **context);
-	/* When calling del_gid, the HW vendor's driver should delete the
-	 * gid of device @device at gid index gid_index of port port_num
-	 * available in @attr.
-	 * Upon the deletion of a GID entry, the HW vendor must free any
-	 * allocated memory. The caller will clear @context afterwards.
-	 * This function is only called when roce_gid_table is used.
-	 */
-	int (*del_gid)(const struct ib_gid_attr *attr,
-		       void **context);
-	int (*query_pkey)(struct ib_device *device,
-			  u8 port_num, u16 index, u16 *pkey);
-	int (*modify_device)(struct ib_device *device,
-			     int device_modify_mask,
-			     struct ib_device_modify *device_modify);
-	int (*modify_port)(struct ib_device *device,
-			   u8 port_num, int port_modify_mask,
-			   struct ib_port_modify *port_modify);
-	struct ib_ucontext *(*alloc_ucontext)(struct ib_device *device,
-					      struct ib_udata *udata);
-	int (*dealloc_ucontext)(struct ib_ucontext *context);
-	int (*mmap)(struct ib_ucontext *context,
-		    struct vm_area_struct *vma);
-	struct ib_pd *(*alloc_pd)(struct ib_device *device,
-				  struct ib_ucontext *context,
-				  struct ib_udata *udata);
-	int (*dealloc_pd)(struct ib_pd *pd);
-	struct ib_ah *(*create_ah)(struct ib_pd *pd,
-				   struct rdma_ah_attr *ah_attr,
-				   struct ib_udata *udata);
-	int (*modify_ah)(struct ib_ah *ah,
-			 struct rdma_ah_attr *ah_attr);
-	int (*query_ah)(struct ib_ah *ah,
-			struct rdma_ah_attr *ah_attr);
-	int (*destroy_ah)(struct ib_ah *ah);
-	struct ib_srq *(*create_srq)(struct ib_pd *pd,
-				     struct ib_srq_init_attr *srq_init_attr,
-				     struct ib_udata *udata);
-	int (*modify_srq)(struct ib_srq *srq,
-			  struct ib_srq_attr *srq_attr,
-			  enum ib_srq_attr_mask srq_attr_mask,
-			  struct ib_udata *udata);
-	int (*query_srq)(struct ib_srq *srq,
-			 struct ib_srq_attr *srq_attr);
-	int (*destroy_srq)(struct ib_srq *srq);
-	int (*post_srq_recv)(struct ib_srq *srq,
-			     const struct ib_recv_wr *recv_wr,
-			     const struct ib_recv_wr **bad_recv_wr);
-	struct ib_qp *(*create_qp)(struct ib_pd *pd,
-				   struct ib_qp_init_attr *qp_init_attr,
-				   struct ib_udata *udata);
-	int (*modify_qp)(struct ib_qp *qp,
-			 struct ib_qp_attr *qp_attr,
-			 int qp_attr_mask,
-			 struct ib_udata *udata);
-	int (*query_qp)(struct ib_qp *qp,
-			struct ib_qp_attr *qp_attr,
-			int qp_attr_mask,
-			struct ib_qp_init_attr *qp_init_attr);
-	int (*destroy_qp)(struct ib_qp *qp);
-	int (*post_send)(struct ib_qp *qp,
-			 const struct ib_send_wr *send_wr,
-			 const struct ib_send_wr **bad_send_wr);
-	int (*post_recv)(struct ib_qp *qp,
-			 const struct ib_recv_wr *recv_wr,
-			 const struct ib_recv_wr **bad_recv_wr);
-	struct ib_cq *(*create_cq)(struct ib_device *device,
-				   const struct ib_cq_init_attr *attr,
-				   struct ib_ucontext *context,
-				   struct ib_udata *udata);
-	int (*modify_cq)(struct ib_cq *cq, u16 cq_count,
-			 u16 cq_period);
-	int (*destroy_cq)(struct ib_cq *cq);
-	int (*resize_cq)(struct ib_cq *cq, int cqe,
-			 struct ib_udata *udata);
-	int (*poll_cq)(struct ib_cq *cq, int num_entries,
-		       struct ib_wc *wc);
-	int (*peek_cq)(struct ib_cq *cq, int wc_cnt);
-	int (*req_notify_cq)(struct ib_cq *cq,
-			     enum ib_cq_notify_flags flags);
-	int (*req_ncomp_notif)(struct ib_cq *cq,
-			       int wc_cnt);
-	struct ib_mr *(*get_dma_mr)(struct ib_pd *pd,
-				    int mr_access_flags);
-	struct ib_mr *(*reg_user_mr)(struct ib_pd *pd,
-				     u64 start, u64 length,
-				     u64 virt_addr,
-				     int mr_access_flags,
-				     struct ib_udata *udata);
-	int (*rereg_user_mr)(struct ib_mr *mr,
-			     int flags,
-			     u64 start, u64 length,
-			     u64 virt_addr,
-			     int mr_access_flags,
-			     struct ib_pd *pd,
-			     struct ib_udata *udata);
-	int (*dereg_mr)(struct ib_mr *mr);
-	struct ib_mr *(*alloc_mr)(struct ib_pd *pd,
-				  enum ib_mr_type mr_type,
-				  u32 max_num_sg);
-	int (*map_mr_sg)(struct ib_mr *mr,
-			 struct scatterlist *sg,
-			 int sg_nents,
-			 unsigned int *sg_offset);
-	struct ib_mw *(*alloc_mw)(struct ib_pd *pd,
-				  enum ib_mw_type type,
-				  struct ib_udata *udata);
-	int (*dealloc_mw)(struct ib_mw *mw);
-	struct ib_fmr *(*alloc_fmr)(struct ib_pd *pd,
-				    int mr_access_flags,
-				    struct ib_fmr_attr *fmr_attr);
-	int (*map_phys_fmr)(struct ib_fmr *fmr,
-			    u64 *page_list, int list_len,
-			    u64 iova);
-	int (*unmap_fmr)(struct list_head *fmr_list);
-	int (*dealloc_fmr)(struct ib_fmr *fmr);
-	int (*attach_mcast)(struct ib_qp *qp,
-			    union ib_gid *gid,
-			    u16 lid);
-	int (*detach_mcast)(struct ib_qp *qp,
-			    union ib_gid *gid,
-			    u16 lid);
-	int (*process_mad)(struct ib_device *device,
-			   int process_mad_flags,
-			   u8 port_num,
-			   const struct ib_wc *in_wc,
-			   const struct ib_grh *in_grh,
-			   const struct ib_mad_hdr *in_mad,
-			   size_t in_mad_size,
-			   struct ib_mad_hdr *out_mad,
-			   size_t *out_mad_size,
-			   u16 *out_mad_pkey_index);
-	struct ib_xrcd *(*alloc_xrcd)(struct ib_device *device,
-				      struct ib_ucontext *ucontext,
-				      struct ib_udata *udata);
-	int (*dealloc_xrcd)(struct ib_xrcd *xrcd);
-	struct ib_flow *(*create_flow)(struct ib_qp *qp,
-				       struct ib_flow_attr
-				       *flow_attr,
-				       int domain,
-				       struct ib_udata *udata);
-	int (*destroy_flow)(struct ib_flow *flow_id);
-	int (*check_mr_status)(struct ib_mr *mr, u32 check_mask,
-			       struct ib_mr_status *mr_status);
-	void (*disassociate_ucontext)(struct ib_ucontext *ibcontext);
-	void (*drain_rq)(struct ib_qp *qp);
-	void (*drain_sq)(struct ib_qp *qp);
-	int (*set_vf_link_state)(struct ib_device *device, int vf, u8 port,
-				 int state);
-	int (*get_vf_config)(struct ib_device *device, int vf, u8 port,
-			     struct ifla_vf_info *ivf);
-	int (*get_vf_stats)(struct ib_device *device, int vf, u8 port,
-			    struct ifla_vf_stats *stats);
-	int (*set_vf_guid)(struct ib_device *device, int vf, u8 port, u64 guid,
-			   int type);
-	struct ib_wq *(*create_wq)(struct ib_pd *pd,
-				   struct ib_wq_init_attr *init_attr,
-				   struct ib_udata *udata);
-	int (*destroy_wq)(struct ib_wq *wq);
-	int (*modify_wq)(struct ib_wq *wq,
-			 struct ib_wq_attr *attr,
-			 u32 wq_attr_mask,
-			 struct ib_udata *udata);
-	struct ib_rwq_ind_table *(*create_rwq_ind_table)(struct ib_device *device,
-							 struct ib_rwq_ind_table_init_attr *init_attr,
-							 struct ib_udata *udata);
-	int (*destroy_rwq_ind_table)(struct ib_rwq_ind_table *wq_ind_table);
-	struct ib_flow_action *(*create_flow_action_esp)(struct ib_device *device,
-							 const struct ib_flow_action_attrs_esp *attr,
-							 struct uverbs_attr_bundle *attrs);
-	int (*destroy_flow_action)(struct ib_flow_action *action);
-	int (*modify_flow_action_esp)(struct ib_flow_action *action,
-				      const struct ib_flow_action_attrs_esp *attr,
-				      struct uverbs_attr_bundle *attrs);
-	struct ib_dm *(*alloc_dm)(struct ib_device *device,
-				  struct ib_ucontext *context,
-				  struct ib_dm_alloc_attr *attr,
-				  struct uverbs_attr_bundle *attrs);
-	int (*dealloc_dm)(struct ib_dm *dm);
-	struct ib_mr *(*reg_dm_mr)(struct ib_pd *pd, struct ib_dm *dm,
-				   struct ib_dm_mr_attr *attr,
-				   struct uverbs_attr_bundle *attrs);
-	struct ib_counters *(*create_counters)(struct ib_device *device,
-					       struct uverbs_attr_bundle *attrs);
-	int (*destroy_counters)(struct ib_counters *counters);
-	int (*read_counters)(struct ib_counters *counters,
-			     struct ib_counters_read_attr *counters_read_attr,
-			     struct uverbs_attr_bundle *attrs);
-
-	/**
-	 * rdma netdev operation
-	 *
-	 * Driver implementing alloc_rdma_netdev must return -EOPNOTSUPP if it
-	 * doesn't support the specified rdma netdev type.
-	 */
-	struct net_device *(*alloc_rdma_netdev)(
-					struct ib_device *device,
-					u8 port_num,
-					enum rdma_netdev_t type,
-					const char *name,
-					unsigned char name_assign_type,
-					void (*setup)(struct net_device *));
-
 	struct module *owner;
 	struct device dev;
 	/* First group for device attributes,
@@ -2835,17 +2572,6 @@ struct ib_device {
	 */
 	struct rdma_restrack_root res;
 
-	/**
-	 * The following mandatory functions are used only at device
-	 * registration. Keep functions such as these at the end of this
-	 * structure to avoid cache line misses when accessing struct ib_device
-	 * in fast paths.
-	 */
-	int (*get_port_immutable)(struct ib_device *, u8, struct ib_port_immutable *);
-	void (*get_dev_fw_str)(struct ib_device *, char *str);
-	const struct cpumask *(*get_vector_affinity)(struct ib_device *ibdev,
-						     int comp_vector);
-
 	const struct uverbs_object_tree_def *const *driver_specs;
 	enum rdma_driver_id driver_id;
 };
@@ -3354,7 +3080,7 @@ static inline bool rdma_cap_roce_gid_table(const struct ib_device *device,
					   u8 port_num)
 {
 	return rdma_protocol_roce(device, port_num) &&
-		device->add_gid && device->del_gid;
+		device->ops.add_gid && device->ops.del_gid;
 }
 
 /*
@@ -3578,7 +3304,8 @@ static inline int ib_post_srq_recv(struct ib_srq *srq,
 {
 	const struct ib_recv_wr *dummy;
 
-	return srq->device->post_srq_recv(srq, recv_wr, bad_recv_wr ? : &dummy);
+	return srq->device->ops.post_srq_recv(srq, recv_wr,
+					      bad_recv_wr ? : &dummy);
 }
 
 /**
@@ -3681,7 +3408,7 @@ static inline int ib_post_send(struct ib_qp *qp,
 {
 	const struct ib_send_wr *dummy;
 
-	return qp->device->post_send(qp, send_wr, bad_send_wr ? : &dummy);
+	return qp->device->ops.post_send(qp, send_wr, bad_send_wr ? : &dummy);
 }
 
 /**
@@ -3698,7 +3425,7 @@ static inline int ib_post_recv(struct ib_qp *qp,
 {
 	const struct ib_recv_wr *dummy;
 
-	return qp->device->post_recv(qp, recv_wr, bad_recv_wr ? : &dummy);
+	return qp->device->ops.post_recv(qp, recv_wr, bad_recv_wr ? : &dummy);
 }
 
 struct ib_cq *__ib_alloc_cq(struct ib_device *dev, void *private,
@@ -3771,7 +3498,7 @@ int ib_destroy_cq(struct ib_cq *cq);
 static inline int ib_poll_cq(struct ib_cq *cq, int num_entries,
			     struct ib_wc *wc)
 {
-	return cq->device->poll_cq(cq, num_entries, wc);
+	return cq->device->ops.poll_cq(cq, num_entries, wc);
 }
 
 /**
@@ -3804,7 +3531,7 @@ static inline int ib_poll_cq(struct ib_cq *cq, int num_entries,
 static inline int ib_req_notify_cq(struct ib_cq *cq,
				   enum ib_cq_notify_flags flags)
 {
-	return cq->device->req_notify_cq(cq, flags);
+	return cq->device->ops.req_notify_cq(cq, flags);
 }
 
 /**
@@ -3816,8 +3543,8 @@ static inline int ib_req_notify_cq(struct ib_cq *cq,
  */
 static inline int ib_req_ncomp_notif(struct ib_cq *cq, int wc_cnt)
 {
-	return cq->device->req_ncomp_notif ?
-		cq->device->req_ncomp_notif(cq, wc_cnt) :
+	return cq->device->ops.req_ncomp_notif ?
+		cq->device->ops.req_ncomp_notif(cq, wc_cnt) :
		-ENOSYS;
 }
 
@@ -4081,7 +3808,7 @@ static inline int ib_map_phys_fmr(struct ib_fmr *fmr,
				  u64 *page_list, int list_len,
				  u64 iova)
 {
-	return fmr->device->map_phys_fmr(fmr, page_list, list_len, iova);
+	return fmr->device->ops.map_phys_fmr(fmr, page_list, list_len, iova);
 }
 
 /**
@@ -4434,10 +4161,10 @@ static inline const struct cpumask *
 ib_get_vector_affinity(struct ib_device *device, int comp_vector)
 {
 	if (comp_vector < 0 || comp_vector >= device->num_comp_vectors ||
-	    !device->get_vector_affinity)
+	    !device->ops.get_vector_affinity)
		return NULL;
 
-	return device->get_vector_affinity(device, comp_vector);
+	return device->ops.get_vector_affinity(device, comp_vector);
 }
 
diff --git a/net/rds/ib.c b/net/rds/ib.c
index c1d97640c0be..33ae2f5cc613 100644
--- a/net/rds/ib.c
+++ b/net/rds/ib.c
@@ -148,8 +148,8 @@ static void rds_ib_add_one(struct ib_device *device)
 	has_fr = (device->attrs.device_cap_flags &
		  IB_DEVICE_MEM_MGT_EXTENSIONS);
-	has_fmr = (device->alloc_fmr && device->dealloc_fmr &&
-		   device->map_phys_fmr && device->unmap_fmr);
+	has_fmr =
(device->ops.alloc_fmr && device->ops.dealloc_fmr && + device->ops.map_phys_fmr && device->ops.unmap_fmr); rds_ibdev->use_fastreg = (has_fr && !has_fmr); rds_ibdev->fmr_max_remaps = device->attrs.max_map_per_fmr?: 32; diff --git a/net/sunrpc/xprtrdma/fmr_ops.c b/net/sunrpc/xprtrdma/fmr_ops.c index 0f7c465d9a5a..ba1bd4bdc29f 100644 --- a/net/sunrpc/xprtrdma/fmr_ops.c +++ b/net/sunrpc/xprtrdma/fmr_ops.c @@ -41,7 +41,7 @@ enum { bool fmr_is_supported(struct rpcrdma_ia *ia) { - if (!ia->ri_device->alloc_fmr) { + if (!ia->ri_device->ops.alloc_fmr) { pr_info("rpcrdma: 'fmr' mode is not supported by device %s\n", ia->ri_device->name); return false;
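The conversion above follows a common kernel pattern: per-device function pointers are replaced by a single ops table, and the core NULL-checks optional callbacks before dispatching (as in ib_req_ncomp_notif() returning -ENOSYS). A minimal userspace sketch of that pattern follows; the demo_* names and the -38 stand-in for -ENOSYS are illustrative only, not the real ib_device_ops layout:

```c
#include <stddef.h>

/* Illustrative errno value: ENOSYS is 38 on Linux. */
#define DEMO_ENOSYS 38

struct demo_cq { int pending; };

/* All driver callbacks live in one table, like struct ib_device_ops. */
struct demo_ops {
	int (*poll_cq)(struct demo_cq *cq, int num_entries); /* mandatory */
	int (*req_ncomp_notif)(struct demo_cq *cq, int wc_cnt); /* optional */
};

/* The device embeds the ops struct by value, one member instead of
 * dozens of individual function pointers. */
struct demo_device {
	struct demo_ops ops;
};

/* A driver's implementation of the mandatory callback. */
static int demo_poll_cq(struct demo_cq *cq, int num_entries)
{
	int n = cq->pending < num_entries ? cq->pending : num_entries;

	cq->pending -= n;
	return n;
}

/* Core-layer wrapper for a mandatory op: direct dispatch through ops. */
static int demo_poll(struct demo_device *dev, struct demo_cq *cq, int n)
{
	return dev->ops.poll_cq(cq, n);
}

/* Core-layer wrapper for an optional op: NULL-checked before the call,
 * mirroring ib_req_ncomp_notif() in the patch above. */
static int demo_req_ncomp_notif(struct demo_device *dev, struct demo_cq *cq,
				int wc_cnt)
{
	return dev->ops.req_ncomp_notif ?
	       dev->ops.req_ncomp_notif(cq, wc_cnt) : -DEMO_ENOSYS;
}
```

A driver that does not support the optional callback simply leaves it unset in its initializer; designated initializers zero the rest of the table, so the core's NULL check does the right thing without any per-driver stubs.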