From patchwork Mon Nov 5 11:35:16 2018
X-Patchwork-Submitter: Kamal Heib <kamalheib1@gmail.com>
X-Patchwork-Id: 10667881
From: Kamal Heib <kamalheib1@gmail.com>
To: Doug Ledford, Jason Gunthorpe
Cc: linux-rdma@vger.kernel.org, kamalheib1@gmail.com
Subject: [PATCH rdma-next v3 08/20] RDMA/mlx4: Initialize ib_device_ops struct
Date: Mon, 5 Nov 2018 13:35:16 +0200
Message-Id: <20181105113528.8317-9-kamalheib1@gmail.com>
X-Mailer: git-send-email 2.14.5
In-Reply-To: <20181105113528.8317-1-kamalheib1@gmail.com>
References: <20181105113528.8317-1-kamalheib1@gmail.com>
X-Mailing-List: linux-rdma@vger.kernel.org

Initialize ib_device_ops with the supported operations using
ib_set_device_ops().

Signed-off-by: Kamal Heib <kamalheib1@gmail.com>
---
 drivers/infiniband/hw/mlx4/main.c | 191 ++++++++++++++++++++++----------------
 1 file changed, 112 insertions(+), 79 deletions(-)

diff --git a/drivers/infiniband/hw/mlx4/main.c b/drivers/infiniband/hw/mlx4/main.c
index 0def2323459c..b74a238374fb 100644
--- a/drivers/infiniband/hw/mlx4/main.c
+++ b/drivers/infiniband/hw/mlx4/main.c
@@ -2220,6 +2220,11 @@ static void mlx4_ib_fill_diag_counters(struct mlx4_ib_dev *ibdev,
 	}
 }
 
+static const struct ib_device_ops mlx4_ib_hw_stats_ops = {
+	.get_hw_stats = mlx4_ib_get_hw_stats,
+	.alloc_hw_stats = mlx4_ib_alloc_hw_stats,
+};
+
 static int mlx4_ib_alloc_diag_counters(struct mlx4_ib_dev *ibdev)
 {
 	struct mlx4_ib_diag_counters *diag = ibdev->diag_counters;
@@ -2246,8 +2251,7 @@ static int mlx4_ib_alloc_diag_counters(struct mlx4_ib_dev *ibdev)
 					   diag[i].offset, i);
 	}
 
-	ibdev->ib_dev.get_hw_stats = mlx4_ib_get_hw_stats;
-	ibdev->ib_dev.alloc_hw_stats = mlx4_ib_alloc_hw_stats;
+	ib_set_device_ops(&ibdev->ib_dev, &mlx4_ib_hw_stats_ops);
 
 	return 0;
 
@@ -2499,6 +2503,101 @@ static void get_fw_ver_str(struct ib_device *device, char *str)
 		 (int) dev->dev->caps.fw_ver & 0xffff);
 }
 
+static const struct ib_device_ops mlx4_ib_dev_ops = {
+	/* Device operations */
+	.query_device = mlx4_ib_query_device,
+	.modify_device = mlx4_ib_modify_device,
+	.get_dev_fw_str = get_fw_ver_str,
+	/* Port operations */
+	.get_netdev = mlx4_ib_get_netdev,
+	.query_port = mlx4_ib_query_port,
+	.get_link_layer = mlx4_ib_port_link_layer,
+	.modify_port = mlx4_ib_modify_port,
+	.get_port_immutable = mlx4_port_immutable,
+	/* GID operations */
+	.add_gid = mlx4_ib_add_gid,
+	.del_gid = mlx4_ib_del_gid,
+	.query_gid = mlx4_ib_query_gid,
+	/* PKey operations */
+	.query_pkey = mlx4_ib_query_pkey,
+	/* Ucontext operations */
+	.alloc_ucontext = mlx4_ib_alloc_ucontext,
+	.dealloc_ucontext = mlx4_ib_dealloc_ucontext,
+	.mmap = mlx4_ib_mmap,
+	.disassociate_ucontext = mlx4_ib_disassociate_ucontext,
+	/* PD operations */
+	.alloc_pd = mlx4_ib_alloc_pd,
+	.dealloc_pd = mlx4_ib_dealloc_pd,
+	/* AH operations */
+	.create_ah = mlx4_ib_create_ah,
+	.query_ah = mlx4_ib_query_ah,
+	.destroy_ah = mlx4_ib_destroy_ah,
+	/* SRQ operations */
+	.create_srq = mlx4_ib_create_srq,
+	.modify_srq = mlx4_ib_modify_srq,
+	.query_srq = mlx4_ib_query_srq,
+	.destroy_srq = mlx4_ib_destroy_srq,
+	.post_srq_recv = mlx4_ib_post_srq_recv,
+	/* QP operations */
+	.create_qp = mlx4_ib_create_qp,
+	.modify_qp = mlx4_ib_modify_qp,
+	.query_qp = mlx4_ib_query_qp,
+	.destroy_qp = mlx4_ib_destroy_qp,
+	.drain_sq = mlx4_ib_drain_sq,
+	.drain_rq = mlx4_ib_drain_rq,
+	.post_send = mlx4_ib_post_send,
+	.post_recv = mlx4_ib_post_recv,
+	/* CQ operations */
+	.create_cq = mlx4_ib_create_cq,
+	.modify_cq = mlx4_ib_modify_cq,
+	.resize_cq = mlx4_ib_resize_cq,
+	.destroy_cq = mlx4_ib_destroy_cq,
+	.poll_cq = mlx4_ib_poll_cq,
+	.req_notify_cq = mlx4_ib_arm_cq,
+	/* MR operations */
+	.get_dma_mr = mlx4_ib_get_dma_mr,
+	.reg_user_mr = mlx4_ib_reg_user_mr,
+	.rereg_user_mr = mlx4_ib_rereg_user_mr,
+	.dereg_mr = mlx4_ib_dereg_mr,
+	.alloc_mr = mlx4_ib_alloc_mr,
+	.map_mr_sg = mlx4_ib_map_mr_sg,
+	/* Multicast operations */
+	.attach_mcast = mlx4_ib_mcg_attach,
+	.detach_mcast = mlx4_ib_mcg_detach,
+	/* MAD operations */
+	.process_mad = mlx4_ib_process_mad,
+};
+
+static const struct ib_device_ops mlx4_ib_dev_wq_ops = {
+	.create_wq = mlx4_ib_create_wq,
+	.modify_wq = mlx4_ib_modify_wq,
+	.destroy_wq = mlx4_ib_destroy_wq,
+	.create_rwq_ind_table = mlx4_ib_create_rwq_ind_table,
+	.destroy_rwq_ind_table = mlx4_ib_destroy_rwq_ind_table,
+};
+
+static const struct ib_device_ops mlx4_ib_dev_fmr_ops = {
+	.alloc_fmr = mlx4_ib_fmr_alloc,
+	.map_phys_fmr = mlx4_ib_map_phys_fmr,
+	.unmap_fmr = mlx4_ib_unmap_fmr,
+	.dealloc_fmr = mlx4_ib_fmr_dealloc,
+};
+
+static const struct ib_device_ops mlx4_ib_dev_mw_ops = {
+	.alloc_mw = mlx4_ib_alloc_mw,
+	.dealloc_mw = mlx4_ib_dealloc_mw,
+};
+
+static const struct ib_device_ops mlx4_ib_dev_xrc_ops = {
+	.alloc_xrcd = mlx4_ib_alloc_xrcd,
+	.dealloc_xrcd = mlx4_ib_dealloc_xrcd,
+};
+
+static const struct ib_device_ops mlx4_ib_dev_fs_ops = {
+	.create_flow = mlx4_ib_create_flow,
+	.destroy_flow = mlx4_ib_destroy_flow,
+};
+
 static void *mlx4_ib_add(struct mlx4_dev *dev)
 {
 	struct mlx4_ib_dev *ibdev;
@@ -2554,9 +2653,6 @@ static void *mlx4_ib_add(struct mlx4_dev *dev)
 					1 : ibdev->num_ports;
 	ibdev->ib_dev.num_comp_vectors = dev->caps.num_comp_vectors;
 	ibdev->ib_dev.dev.parent = &dev->persist->pdev->dev;
-	ibdev->ib_dev.get_netdev = mlx4_ib_get_netdev;
-	ibdev->ib_dev.add_gid = mlx4_ib_add_gid;
-	ibdev->ib_dev.del_gid = mlx4_ib_del_gid;
 
 	if (dev->caps.userspace_caps)
 		ibdev->ib_dev.uverbs_abi_ver = MLX4_IB_UVERBS_ABI_VERSION;
@@ -2589,116 +2685,53 @@ static void *mlx4_ib_add(struct mlx4_dev *dev)
 		(1ull << IB_USER_VERBS_CMD_CREATE_XSRQ) |
 		(1ull << IB_USER_VERBS_CMD_OPEN_QP);
 
-	ibdev->ib_dev.query_device = mlx4_ib_query_device;
-	ibdev->ib_dev.query_port = mlx4_ib_query_port;
-	ibdev->ib_dev.get_link_layer = mlx4_ib_port_link_layer;
-	ibdev->ib_dev.query_gid = mlx4_ib_query_gid;
-	ibdev->ib_dev.query_pkey = mlx4_ib_query_pkey;
-	ibdev->ib_dev.modify_device = mlx4_ib_modify_device;
-	ibdev->ib_dev.modify_port = mlx4_ib_modify_port;
-	ibdev->ib_dev.alloc_ucontext = mlx4_ib_alloc_ucontext;
-	ibdev->ib_dev.dealloc_ucontext = mlx4_ib_dealloc_ucontext;
-	ibdev->ib_dev.mmap = mlx4_ib_mmap;
-	ibdev->ib_dev.alloc_pd = mlx4_ib_alloc_pd;
-	ibdev->ib_dev.dealloc_pd = mlx4_ib_dealloc_pd;
-	ibdev->ib_dev.create_ah = mlx4_ib_create_ah;
-	ibdev->ib_dev.query_ah = mlx4_ib_query_ah;
-	ibdev->ib_dev.destroy_ah = mlx4_ib_destroy_ah;
-	ibdev->ib_dev.create_srq = mlx4_ib_create_srq;
-	ibdev->ib_dev.modify_srq = mlx4_ib_modify_srq;
-	ibdev->ib_dev.query_srq = mlx4_ib_query_srq;
-	ibdev->ib_dev.destroy_srq = mlx4_ib_destroy_srq;
-	ibdev->ib_dev.post_srq_recv = mlx4_ib_post_srq_recv;
-	ibdev->ib_dev.create_qp = mlx4_ib_create_qp;
-	ibdev->ib_dev.modify_qp = mlx4_ib_modify_qp;
-	ibdev->ib_dev.query_qp = mlx4_ib_query_qp;
-	ibdev->ib_dev.destroy_qp = mlx4_ib_destroy_qp;
-	ibdev->ib_dev.drain_sq = mlx4_ib_drain_sq;
-	ibdev->ib_dev.drain_rq = mlx4_ib_drain_rq;
-	ibdev->ib_dev.post_send = mlx4_ib_post_send;
-	ibdev->ib_dev.post_recv = mlx4_ib_post_recv;
-	ibdev->ib_dev.create_cq = mlx4_ib_create_cq;
-	ibdev->ib_dev.modify_cq = mlx4_ib_modify_cq;
-	ibdev->ib_dev.resize_cq = mlx4_ib_resize_cq;
-	ibdev->ib_dev.destroy_cq = mlx4_ib_destroy_cq;
-	ibdev->ib_dev.poll_cq = mlx4_ib_poll_cq;
-	ibdev->ib_dev.req_notify_cq = mlx4_ib_arm_cq;
-	ibdev->ib_dev.get_dma_mr = mlx4_ib_get_dma_mr;
-	ibdev->ib_dev.reg_user_mr = mlx4_ib_reg_user_mr;
-	ibdev->ib_dev.rereg_user_mr = mlx4_ib_rereg_user_mr;
-	ibdev->ib_dev.dereg_mr = mlx4_ib_dereg_mr;
-	ibdev->ib_dev.alloc_mr = mlx4_ib_alloc_mr;
-	ibdev->ib_dev.map_mr_sg = mlx4_ib_map_mr_sg;
-	ibdev->ib_dev.attach_mcast = mlx4_ib_mcg_attach;
-	ibdev->ib_dev.detach_mcast = mlx4_ib_mcg_detach;
-	ibdev->ib_dev.process_mad = mlx4_ib_process_mad;
-	ibdev->ib_dev.get_port_immutable = mlx4_port_immutable;
-	ibdev->ib_dev.get_dev_fw_str = get_fw_ver_str;
-	ibdev->ib_dev.disassociate_ucontext = mlx4_ib_disassociate_ucontext;
-
+	ib_set_device_ops(&ibdev->ib_dev, &mlx4_ib_dev_ops);
 	ibdev->ib_dev.uverbs_ex_cmd_mask |=
-		(1ull << IB_USER_VERBS_EX_CMD_MODIFY_CQ);
+		(1ull << IB_USER_VERBS_EX_CMD_MODIFY_CQ) |
+		(1ull << IB_USER_VERBS_EX_CMD_QUERY_DEVICE) |
+		(1ull << IB_USER_VERBS_EX_CMD_CREATE_CQ) |
+		(1ull << IB_USER_VERBS_EX_CMD_CREATE_QP);
 
 	if ((dev->caps.flags2 & MLX4_DEV_CAP_FLAG2_RSS) &&
 	    ((mlx4_ib_port_link_layer(&ibdev->ib_dev, 1) ==
 	    IB_LINK_LAYER_ETHERNET) ||
 	    (mlx4_ib_port_link_layer(&ibdev->ib_dev, 2) ==
 	    IB_LINK_LAYER_ETHERNET))) {
-		ibdev->ib_dev.create_wq = mlx4_ib_create_wq;
-		ibdev->ib_dev.modify_wq = mlx4_ib_modify_wq;
-		ibdev->ib_dev.destroy_wq = mlx4_ib_destroy_wq;
-		ibdev->ib_dev.create_rwq_ind_table =
-			mlx4_ib_create_rwq_ind_table;
-		ibdev->ib_dev.destroy_rwq_ind_table =
-			mlx4_ib_destroy_rwq_ind_table;
 		ibdev->ib_dev.uverbs_ex_cmd_mask |=
 			(1ull << IB_USER_VERBS_EX_CMD_CREATE_WQ) |
 			(1ull << IB_USER_VERBS_EX_CMD_MODIFY_WQ) |
 			(1ull << IB_USER_VERBS_EX_CMD_DESTROY_WQ) |
 			(1ull << IB_USER_VERBS_EX_CMD_CREATE_RWQ_IND_TBL) |
 			(1ull << IB_USER_VERBS_EX_CMD_DESTROY_RWQ_IND_TBL);
+		ib_set_device_ops(&ibdev->ib_dev, &mlx4_ib_dev_wq_ops);
 	}
 
-	if (!mlx4_is_slave(ibdev->dev)) {
-		ibdev->ib_dev.alloc_fmr = mlx4_ib_fmr_alloc;
-		ibdev->ib_dev.map_phys_fmr = mlx4_ib_map_phys_fmr;
-		ibdev->ib_dev.unmap_fmr = mlx4_ib_unmap_fmr;
-		ibdev->ib_dev.dealloc_fmr = mlx4_ib_fmr_dealloc;
-	}
+	if (!mlx4_is_slave(ibdev->dev))
+		ib_set_device_ops(&ibdev->ib_dev, &mlx4_ib_dev_fmr_ops);
 
 	if (dev->caps.flags & MLX4_DEV_CAP_FLAG_MEM_WINDOW ||
 	    dev->caps.bmme_flags & MLX4_BMME_FLAG_TYPE_2_WIN) {
-		ibdev->ib_dev.alloc_mw = mlx4_ib_alloc_mw;
-		ibdev->ib_dev.dealloc_mw = mlx4_ib_dealloc_mw;
-
 		ibdev->ib_dev.uverbs_cmd_mask |=
 			(1ull << IB_USER_VERBS_CMD_ALLOC_MW) |
 			(1ull << IB_USER_VERBS_CMD_DEALLOC_MW);
+		ib_set_device_ops(&ibdev->ib_dev, &mlx4_ib_dev_mw_ops);
 	}
 
 	if (dev->caps.flags & MLX4_DEV_CAP_FLAG_XRC) {
-		ibdev->ib_dev.alloc_xrcd = mlx4_ib_alloc_xrcd;
-		ibdev->ib_dev.dealloc_xrcd = mlx4_ib_dealloc_xrcd;
 		ibdev->ib_dev.uverbs_cmd_mask |=
 			(1ull << IB_USER_VERBS_CMD_OPEN_XRCD) |
 			(1ull << IB_USER_VERBS_CMD_CLOSE_XRCD);
+		ib_set_device_ops(&ibdev->ib_dev, &mlx4_ib_dev_xrc_ops);
 	}
 
 	if (check_flow_steering_support(dev)) {
 		ibdev->steering_support = MLX4_STEERING_MODE_DEVICE_MANAGED;
-		ibdev->ib_dev.create_flow = mlx4_ib_create_flow;
-		ibdev->ib_dev.destroy_flow = mlx4_ib_destroy_flow;
-
 		ibdev->ib_dev.uverbs_ex_cmd_mask |=
 			(1ull << IB_USER_VERBS_EX_CMD_CREATE_FLOW) |
 			(1ull << IB_USER_VERBS_EX_CMD_DESTROY_FLOW);
+		ib_set_device_ops(&ibdev->ib_dev, &mlx4_ib_dev_fs_ops);
 	}
 
-	ibdev->ib_dev.uverbs_ex_cmd_mask |=
-		(1ull << IB_USER_VERBS_EX_CMD_QUERY_DEVICE) |
-		(1ull << IB_USER_VERBS_EX_CMD_CREATE_CQ) |
-		(1ull << IB_USER_VERBS_EX_CMD_CREATE_QP);
-
 	mlx4_ib_alloc_eqs(dev, ibdev);
 
 	spin_lock_init(&iboe->lock);
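
For context on the mechanism this conversion relies on: ib_set_device_ops() is
expected to copy each callback that the driver set in its const ib_device_ops
table into the ib_device, leaving alone anything the table leaves NULL. That is
what lets the feature-specific tables above (mlx4_ib_dev_wq_ops,
mlx4_ib_dev_fmr_ops, mlx4_ib_dev_mw_ops, mlx4_ib_dev_xrc_ops,
mlx4_ib_dev_fs_ops) be layered on top of the base mlx4_ib_dev_ops table only
when the hardware supports them. The stand-alone C sketch below mirrors that
copy-if-set pattern under that assumption; the names example_device,
example_device_ops and example_set_device_ops are hypothetical stand-ins, not
kernel symbols.

/*
 * Illustrative sketch only, not part of the patch: a driver fills a const
 * ops table once, and a helper copies every non-NULL callback into the
 * device, so several tables can be applied cumulatively.
 */
#include <stdio.h>

struct example_device_ops {
	int (*query_device)(void);
	int (*query_port)(void);
	int (*create_qp)(void);
};

struct example_device {
	struct example_device_ops ops;
};

/* Copy a callback only if the table provides it and it is not already set. */
#define EXAMPLE_SET_OP(dev, src, name)                          \
	do {                                                    \
		if ((src)->name && !(dev)->ops.name)            \
			(dev)->ops.name = (src)->name;          \
	} while (0)

static void example_set_device_ops(struct example_device *dev,
				   const struct example_device_ops *ops)
{
	EXAMPLE_SET_OP(dev, ops, query_device);
	EXAMPLE_SET_OP(dev, ops, query_port);
	EXAMPLE_SET_OP(dev, ops, create_qp);
}

static int demo_query_device(void) { return 0; }
static int demo_create_qp(void) { return 0; }

/* A driver-side const table, analogous to mlx4_ib_dev_ops above. */
static const struct example_device_ops demo_ops = {
	.query_device = demo_query_device,
	.create_qp = demo_create_qp,
	/* .query_port deliberately left NULL */
};

int main(void)
{
	struct example_device dev = { { 0 } };

	example_set_device_ops(&dev, &demo_ops);
	printf("query_device set: %s\n", dev.ops.query_device ? "yes" : "no");
	printf("query_port   set: %s\n", dev.ops.query_port ? "yes" : "no");
	printf("create_qp    set: %s\n", dev.ops.create_qp ? "yes" : "no");
	return 0;
}

Applying a second table that only sets .query_port would fill in just that
callback and leave the already-copied ones untouched, which is how the
conditional WQ/FMR/MW/XRC/flow-steering tables are stacked on the base table
in mlx4_ib_add().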