From patchwork Tue Jun 29 20:28:01 2021
X-Patchwork-Submitter: Bob Pearson
X-Patchwork-Id: 12350805
From: Bob Pearson
To: jgg@nvidia.com, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org
Cc: Bob Pearson
Subject: [PATCH for-next resending 1/5] RDMA/rxe: Move ICRC checking to a subroutine
Date: Tue, 29 Jun 2021 15:28:01 -0500
Message-Id: <20210629202804.29403-2-rpearsonhpe@gmail.com>
In-Reply-To: <20210629202804.29403-1-rpearsonhpe@gmail.com>
References: <20210629202804.29403-1-rpearsonhpe@gmail.com>
List-ID: X-Mailing-List: linux-rdma@vger.kernel.org

Move the code in rxe_recv.c that checks the ICRC on incoming packets to a
subroutine rxe_icrc_check() and move it to rxe_icrc.c.

Signed-off-by: Bob Pearson
---
 drivers/infiniband/sw/rxe/rxe_icrc.c | 38 ++++++++++++++++++++++++++++
 drivers/infiniband/sw/rxe/rxe_loc.h  |  2 ++
 drivers/infiniband/sw/rxe/rxe_recv.c | 23 ++---------------
 3 files changed, 42 insertions(+), 21 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe_icrc.c b/drivers/infiniband/sw/rxe/rxe_icrc.c
index 66b2aad54bb7..5193dfa94a75 100644
--- a/drivers/infiniband/sw/rxe/rxe_icrc.c
+++ b/drivers/infiniband/sw/rxe/rxe_icrc.c
@@ -67,3 +67,41 @@ u32 rxe_icrc_hdr(struct rxe_pkt_info *pkt, struct sk_buff *skb)
 		rxe_opcode[pkt->opcode].length - RXE_BTH_BYTES);
 	return crc;
 }
+
+/**
+ * rxe_icrc_check - Compute ICRC for a packet and compare to the ICRC
+ * delivered in the packet.
+ * @skb: The packet buffer with packet info in skb->cb[] (receive path)
+ *
+ * Returns 0 on success or an error on failure
+ */
+int rxe_icrc_check(struct sk_buff *skb)
+{
+	struct rxe_pkt_info *pkt = SKB_TO_PKT(skb);
+	__be32 *icrcp;
+	u32 pkt_icrc;
+	u32 icrc;
+
+	icrcp = (__be32 *)(pkt->hdr + pkt->paylen - RXE_ICRC_SIZE);
+	pkt_icrc = be32_to_cpu(*icrcp);
+
+	icrc = rxe_icrc_hdr(pkt, skb);
+	icrc = rxe_crc32(pkt->rxe, icrc, (u8 *)payload_addr(pkt),
+			 payload_size(pkt) + bth_pad(pkt));
+	icrc = (__force u32)cpu_to_be32(~icrc);
+
+	if (unlikely(icrc != pkt_icrc)) {
+		if (skb->protocol == htons(ETH_P_IPV6))
+			pr_warn_ratelimited("bad ICRC from %pI6c\n",
+					    &ipv6_hdr(skb)->saddr);
+		else if (skb->protocol == htons(ETH_P_IP))
+			pr_warn_ratelimited("bad ICRC from %pI4\n",
+					    &ip_hdr(skb)->saddr);
+		else
+			pr_warn_ratelimited("bad ICRC from unknown\n");
+
+		return -EINVAL;
+	}
+
+	return 0;
+}
diff --git a/drivers/infiniband/sw/rxe/rxe_loc.h b/drivers/infiniband/sw/rxe/rxe_loc.h
index 1ddb20855dee..6689e51647db 100644
--- a/drivers/infiniband/sw/rxe/rxe_loc.h
+++ b/drivers/infiniband/sw/rxe/rxe_loc.h
@@ -193,7 +193,9 @@ int rxe_completer(void *arg);
 int rxe_requester(void *arg);
 int rxe_responder(void *arg);
 
+/* rxe_icrc.c */
 u32 rxe_icrc_hdr(struct rxe_pkt_info *pkt, struct sk_buff *skb);
+int rxe_icrc_check(struct sk_buff *skb);
 
 void rxe_resp_queue_pkt(struct rxe_qp *qp, struct sk_buff *skb);
 
diff --git a/drivers/infiniband/sw/rxe/rxe_recv.c b/drivers/infiniband/sw/rxe/rxe_recv.c
index 7a49e27da23a..8582b3163e2c 100644
--- a/drivers/infiniband/sw/rxe/rxe_recv.c
+++ b/drivers/infiniband/sw/rxe/rxe_recv.c
@@ -361,8 +361,6 @@ void rxe_rcv(struct sk_buff *skb)
 	int err;
 	struct rxe_pkt_info *pkt = SKB_TO_PKT(skb);
 	struct rxe_dev *rxe = pkt->rxe;
-	__be32 *icrcp;
-	u32 calc_icrc, pack_icrc;
 
 	if (unlikely(skb->len < RXE_BTH_BYTES))
 		goto drop;
@@ -384,26 +382,9 @@ void rxe_rcv(struct sk_buff *skb)
 	if (unlikely(err))
 		goto drop;
 
-	/* Verify ICRC */
-	icrcp = (__be32 *)(pkt->hdr + pkt->paylen - RXE_ICRC_SIZE);
-	pack_icrc = be32_to_cpu(*icrcp);
-
-	calc_icrc = rxe_icrc_hdr(pkt, skb);
-	calc_icrc = rxe_crc32(rxe, calc_icrc, (u8 *)payload_addr(pkt),
-			      payload_size(pkt) + bth_pad(pkt));
-	calc_icrc = (__force u32)cpu_to_be32(~calc_icrc);
-	if (unlikely(calc_icrc != pack_icrc)) {
-		if (skb->protocol == htons(ETH_P_IPV6))
-			pr_warn_ratelimited("bad ICRC from %pI6c\n",
-					    &ipv6_hdr(skb)->saddr);
-		else if (skb->protocol == htons(ETH_P_IP))
-			pr_warn_ratelimited("bad ICRC from %pI4\n",
-					    &ip_hdr(skb)->saddr);
-		else
-			pr_warn_ratelimited("bad ICRC from unknown\n");
-
+	err = rxe_icrc_check(skb);
+	if (unlikely(err))
 		goto drop;
-	}
 
 	rxe_counter_inc(rxe, RXE_CNT_RCVD_PKTS);

From patchwork Tue Jun 29 20:28:02 2021
X-Patchwork-Submitter: Bob Pearson
X-Patchwork-Id: 12350803
From: Bob Pearson
To: jgg@nvidia.com, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org
Cc: Bob Pearson
Subject: [PATCH for-next resending 2/5] RDMA/rxe: Move rxe_xmit_packet to a subroutine
Date: Tue, 29 Jun 2021 15:28:02 -0500
Message-Id: <20210629202804.29403-3-rpearsonhpe@gmail.com>
In-Reply-To: <20210629202804.29403-1-rpearsonhpe@gmail.com>
References: <20210629202804.29403-1-rpearsonhpe@gmail.com>
List-ID: X-Mailing-List: linux-rdma@vger.kernel.org

rxe_xmit_packet() was an overlong inline subroutine. Move it into
rxe_net.c as an ordinary subroutine. Change rxe_loopback() and rxe_send()
to static subroutines since they are no longer shared. Allow
rxe_loopback() to return an error.

Signed-off-by: Bob Pearson
---
 drivers/infiniband/sw/rxe/rxe_loc.h | 47 ++-----------------------
 drivers/infiniband/sw/rxe/rxe_net.c | 54 ++++++++++++++++++++++++++---
 2 files changed, 51 insertions(+), 50 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe_loc.h b/drivers/infiniband/sw/rxe/rxe_loc.h
index 6689e51647db..3468a61efe4e 100644
--- a/drivers/infiniband/sw/rxe/rxe_loc.h
+++ b/drivers/infiniband/sw/rxe/rxe_loc.h
@@ -99,11 +99,11 @@ struct rxe_mw *rxe_lookup_mw(struct rxe_qp *qp, int access, u32 rkey);
 void rxe_mw_cleanup(struct rxe_pool_entry *arg);
 
 /* rxe_net.c */
-void rxe_loopback(struct sk_buff *skb);
-int rxe_send(struct rxe_pkt_info *pkt, struct sk_buff *skb);
 struct sk_buff *rxe_init_packet(struct rxe_dev *rxe, struct rxe_av *av,
 				int paylen, struct rxe_pkt_info *pkt);
 int rxe_prepare(struct rxe_pkt_info *pkt, struct sk_buff *skb, u32 *crc);
+int rxe_xmit_packet(struct rxe_qp *qp, struct rxe_pkt_info *pkt,
+		    struct sk_buff *skb);
 const char *rxe_parent_name(struct rxe_dev *rxe, unsigned int port_num);
 int rxe_mcast_add(struct rxe_dev *rxe, union ib_gid *mgid);
 int rxe_mcast_delete(struct rxe_dev *rxe, union ib_gid *mgid);
@@ -206,47 +206,4 @@ static inline unsigned int wr_opcode_mask(int opcode, struct rxe_qp *qp)
 	return rxe_wr_opcode_info[opcode].mask[qp->ibqp.qp_type];
 }
 
-static inline int rxe_xmit_packet(struct rxe_qp *qp, struct rxe_pkt_info *pkt,
-				  struct sk_buff *skb)
-{
-	int err;
-	int is_request = pkt->mask & RXE_REQ_MASK;
-	struct rxe_dev *rxe = to_rdev(qp->ibqp.device);
-
-	if ((is_request && (qp->req.state != QP_STATE_READY)) ||
-	    (!is_request && (qp->resp.state != QP_STATE_READY))) {
-		pr_info("Packet dropped. QP is not in ready state\n");
-		goto drop;
-	}
-
-	if (pkt->mask & RXE_LOOPBACK_MASK) {
-		memcpy(SKB_TO_PKT(skb), pkt, sizeof(*pkt));
-		rxe_loopback(skb);
-		err = 0;
-	} else {
-		err = rxe_send(pkt, skb);
-	}
-
-	if (err) {
-		rxe->xmit_errors++;
-		rxe_counter_inc(rxe, RXE_CNT_SEND_ERR);
-		return err;
-	}
-
-	if ((qp_type(qp) != IB_QPT_RC) &&
-	    (pkt->mask & RXE_END_MASK)) {
-		pkt->wqe->state = wqe_state_done;
-		rxe_run_task(&qp->comp.task, 1);
-	}
-
-	rxe_counter_inc(rxe, RXE_CNT_SENT_PKTS);
-	goto done;
-
-drop:
-	kfree_skb(skb);
-	err = 0;
-done:
-	return err;
-}
-
 #endif /* RXE_LOC_H */
diff --git a/drivers/infiniband/sw/rxe/rxe_net.c b/drivers/infiniband/sw/rxe/rxe_net.c
index dec92928a1cd..6968c247bcf7 100644
--- a/drivers/infiniband/sw/rxe/rxe_net.c
+++ b/drivers/infiniband/sw/rxe/rxe_net.c
@@ -373,7 +373,7 @@ static void rxe_skb_tx_dtor(struct sk_buff *skb)
 	rxe_drop_ref(qp);
 }
 
-int rxe_send(struct rxe_pkt_info *pkt, struct sk_buff *skb)
+static int rxe_send(struct rxe_pkt_info *pkt, struct sk_buff *skb)
 {
 	int err;
 
@@ -406,19 +406,63 @@ int rxe_send(struct rxe_pkt_info *pkt, struct sk_buff *skb)
 /* fix up a send packet to match the packets
  * received from UDP before looping them back
  */
-void rxe_loopback(struct sk_buff *skb)
+static int rxe_loopback(struct rxe_pkt_info *pkt, struct sk_buff *skb)
 {
-	struct rxe_pkt_info *pkt = SKB_TO_PKT(skb);
+	memcpy(SKB_TO_PKT(skb), pkt, sizeof(*pkt));
 
 	if (skb->protocol == htons(ETH_P_IP))
 		skb_pull(skb, sizeof(struct iphdr));
 	else
 		skb_pull(skb, sizeof(struct ipv6hdr));
 
-	if (WARN_ON(!ib_device_try_get(&pkt->rxe->ib_dev)))
+	if (WARN_ON(!ib_device_try_get(&pkt->rxe->ib_dev))) {
 		kfree_skb(skb);
+		return -EIO;
+	}
+
+	rxe_rcv(skb);
+
+	return 0;
+}
+
+int rxe_xmit_packet(struct rxe_qp *qp, struct rxe_pkt_info *pkt,
+		    struct sk_buff *skb)
+{
+	int is_request = pkt->mask & RXE_REQ_MASK;
+	struct rxe_dev *rxe = to_rdev(qp->ibqp.device);
+	int err;
+
+	if ((is_request && (qp->req.state != QP_STATE_READY)) ||
+	    (!is_request && (qp->resp.state != QP_STATE_READY))) {
+		pr_info("Packet dropped. QP is not in ready state\n");
+		goto drop;
+	}
+
+	if (pkt->mask & RXE_LOOPBACK_MASK)
+		err = rxe_loopback(pkt, skb);
 	else
-		rxe_rcv(skb);
+		err = rxe_send(pkt, skb);
+
+	if (err) {
+		rxe->xmit_errors++;
+		rxe_counter_inc(rxe, RXE_CNT_SEND_ERR);
+		return err;
+	}
+
+	if ((qp_type(qp) != IB_QPT_RC) &&
+	    (pkt->mask & RXE_END_MASK)) {
+		pkt->wqe->state = wqe_state_done;
+		rxe_run_task(&qp->comp.task, 1);
+	}
+
+	rxe_counter_inc(rxe, RXE_CNT_SENT_PKTS);
+	goto done;
+
+drop:
+	kfree_skb(skb);
+	err = 0;
+done:
+	return err;
 }
 
 struct sk_buff *rxe_init_packet(struct rxe_dev *rxe, struct rxe_av *av,

From patchwork Tue Jun 29 20:28:03 2021
X-Patchwork-Submitter: Bob Pearson
X-Patchwork-Id: 12350807
From: Bob Pearson
To: jgg@nvidia.com, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org
Cc: Bob Pearson
Subject: [PATCH for-next resending 3/5] RDMA/rxe: Move ICRC generation to a subroutine
Date: Tue, 29 Jun 2021 15:28:03 -0500
Message-Id: <20210629202804.29403-4-rpearsonhpe@gmail.com>
In-Reply-To: <20210629202804.29403-1-rpearsonhpe@gmail.com>
References: <20210629202804.29403-1-rpearsonhpe@gmail.com>
List-ID: X-Mailing-List: linux-rdma@vger.kernel.org

Isolate ICRC generation into a single subroutine named
rxe_icrc_generate() in rxe_icrc.c. Remove scattered CRC generation code
from elsewhere.
Signed-off-by: Bob Pearson --- drivers/infiniband/sw/rxe/rxe_comp.c | 4 +-- drivers/infiniband/sw/rxe/rxe_icrc.c | 13 ++++++++ drivers/infiniband/sw/rxe/rxe_loc.h | 10 +++--- drivers/infiniband/sw/rxe/rxe_mr.c | 47 ++++++++++++---------------- drivers/infiniband/sw/rxe/rxe_net.c | 8 ++--- drivers/infiniband/sw/rxe/rxe_req.c | 13 ++------ drivers/infiniband/sw/rxe/rxe_resp.c | 33 +++++-------------- 7 files changed, 54 insertions(+), 74 deletions(-) diff --git a/drivers/infiniband/sw/rxe/rxe_comp.c b/drivers/infiniband/sw/rxe/rxe_comp.c index 58ad9c2644f3..d2d802c776fd 100644 --- a/drivers/infiniband/sw/rxe/rxe_comp.c +++ b/drivers/infiniband/sw/rxe/rxe_comp.c @@ -349,7 +349,7 @@ static inline enum comp_state do_read(struct rxe_qp *qp, ret = copy_data(qp->pd, IB_ACCESS_LOCAL_WRITE, &wqe->dma, payload_addr(pkt), - payload_size(pkt), RXE_TO_MR_OBJ, NULL); + payload_size(pkt), RXE_TO_MR_OBJ); if (ret) { wqe->status = IB_WC_LOC_PROT_ERR; return COMPST_ERROR; @@ -371,7 +371,7 @@ static inline enum comp_state do_atomic(struct rxe_qp *qp, ret = copy_data(qp->pd, IB_ACCESS_LOCAL_WRITE, &wqe->dma, &atomic_orig, - sizeof(u64), RXE_TO_MR_OBJ, NULL); + sizeof(u64), RXE_TO_MR_OBJ); if (ret) { wqe->status = IB_WC_LOC_PROT_ERR; return COMPST_ERROR; diff --git a/drivers/infiniband/sw/rxe/rxe_icrc.c b/drivers/infiniband/sw/rxe/rxe_icrc.c index 5193dfa94a75..5424b8bea908 100644 --- a/drivers/infiniband/sw/rxe/rxe_icrc.c +++ b/drivers/infiniband/sw/rxe/rxe_icrc.c @@ -105,3 +105,16 @@ int rxe_icrc_check(struct sk_buff *skb) return 0; } + +/* rxe_icrc_generate- compute ICRC for a packet. 
*/ +void rxe_icrc_generate(struct rxe_pkt_info *pkt, struct sk_buff *skb) +{ + __be32 *icrcp; + u32 icrc; + + icrcp = (__be32 *)(pkt->hdr + pkt->paylen - RXE_ICRC_SIZE); + icrc = rxe_icrc_hdr(pkt, skb); + icrc = rxe_crc32(pkt->rxe, icrc, (u8 *)payload_addr(pkt), + payload_size(pkt) + bth_pad(pkt)); + *icrcp = ~icrc; +} diff --git a/drivers/infiniband/sw/rxe/rxe_loc.h b/drivers/infiniband/sw/rxe/rxe_loc.h index 3468a61efe4e..2c724b9970d6 100644 --- a/drivers/infiniband/sw/rxe/rxe_loc.h +++ b/drivers/infiniband/sw/rxe/rxe_loc.h @@ -77,10 +77,9 @@ int rxe_mr_init_user(struct rxe_pd *pd, u64 start, u64 length, u64 iova, int access, struct rxe_mr *mr); int rxe_mr_init_fast(struct rxe_pd *pd, int max_pages, struct rxe_mr *mr); int rxe_mr_copy(struct rxe_mr *mr, u64 iova, void *addr, int length, - enum rxe_mr_copy_dir dir, u32 *crcp); -int copy_data(struct rxe_pd *pd, int access, - struct rxe_dma_info *dma, void *addr, int length, - enum rxe_mr_copy_dir dir, u32 *crcp); + enum rxe_mr_copy_dir dir); +int copy_data(struct rxe_pd *pd, int access, struct rxe_dma_info *dma, + void *addr, int length, enum rxe_mr_copy_dir dir); void *iova_to_vaddr(struct rxe_mr *mr, u64 iova, int length); struct rxe_mr *lookup_mr(struct rxe_pd *pd, int access, u32 key, enum rxe_mr_lookup_type type); @@ -101,7 +100,7 @@ void rxe_mw_cleanup(struct rxe_pool_entry *arg); /* rxe_net.c */ struct sk_buff *rxe_init_packet(struct rxe_dev *rxe, struct rxe_av *av, int paylen, struct rxe_pkt_info *pkt); -int rxe_prepare(struct rxe_pkt_info *pkt, struct sk_buff *skb, u32 *crc); +int rxe_prepare(struct rxe_pkt_info *pkt, struct sk_buff *skb); int rxe_xmit_packet(struct rxe_qp *qp, struct rxe_pkt_info *pkt, struct sk_buff *skb); const char *rxe_parent_name(struct rxe_dev *rxe, unsigned int port_num); @@ -196,6 +195,7 @@ int rxe_responder(void *arg); /* rxe_icrc.c */ u32 rxe_icrc_hdr(struct rxe_pkt_info *pkt, struct sk_buff *skb); int rxe_icrc_check(struct sk_buff *skb); +void rxe_icrc_generate(struct 
rxe_pkt_info *pkt, struct sk_buff *skb); void rxe_resp_queue_pkt(struct rxe_qp *qp, struct sk_buff *skb); diff --git a/drivers/infiniband/sw/rxe/rxe_mr.c b/drivers/infiniband/sw/rxe/rxe_mr.c index 6aabcb4de235..f94fd143e27b 100644 --- a/drivers/infiniband/sw/rxe/rxe_mr.c +++ b/drivers/infiniband/sw/rxe/rxe_mr.c @@ -279,11 +279,10 @@ void *iova_to_vaddr(struct rxe_mr *mr, u64 iova, int length) } /* copy data from a range (vaddr, vaddr+length-1) to or from - * a mr object starting at iova. Compute incremental value of - * crc32 if crcp is not zero. caller must hold a reference to mr + * a mr object starting at iova. */ int rxe_mr_copy(struct rxe_mr *mr, u64 iova, void *addr, int length, - enum rxe_mr_copy_dir dir, u32 *crcp) + enum rxe_mr_copy_dir dir) { int err; int bytes; @@ -293,24 +292,23 @@ int rxe_mr_copy(struct rxe_mr *mr, u64 iova, void *addr, int length, int m; int i; size_t offset; - u32 crc = crcp ? (*crcp) : 0; + u8 *src; + u8 *dest; if (length == 0) return 0; if (mr->type == RXE_MR_TYPE_DMA) { - u8 *src, *dest; - - src = (dir == RXE_TO_MR_OBJ) ? addr : ((void *)(uintptr_t)iova); - - dest = (dir == RXE_TO_MR_OBJ) ? ((void *)(uintptr_t)iova) : addr; + if (dir == RXE_TO_MR_OBJ) { + src = addr; + dest = ((void *)(uintptr_t)iova); + } else { + src = ((void *)(uintptr_t)iova); + dest = addr; + } memcpy(dest, src, length); - if (crcp) - *crcp = rxe_crc32(to_rdev(mr->ibmr.device), *crcp, dest, - length); - return 0; } @@ -328,11 +326,14 @@ int rxe_mr_copy(struct rxe_mr *mr, u64 iova, void *addr, int length, buf = map[0]->buf + i; while (length > 0) { - u8 *src, *dest; - va = (u8 *)(uintptr_t)buf->addr + offset; - src = (dir == RXE_TO_MR_OBJ) ? addr : va; - dest = (dir == RXE_TO_MR_OBJ) ? 
va : addr; + if (dir == RXE_TO_MR_OBJ) { + src = addr; + dest = va; + } else { + src = va; + dest = addr; + } bytes = buf->size - offset; @@ -341,10 +342,6 @@ int rxe_mr_copy(struct rxe_mr *mr, u64 iova, void *addr, int length, memcpy(dest, src, bytes); - if (crcp) - crc = rxe_crc32(to_rdev(mr->ibmr.device), crc, dest, - bytes); - length -= bytes; addr += bytes; @@ -359,9 +356,6 @@ int rxe_mr_copy(struct rxe_mr *mr, u64 iova, void *addr, int length, } } - if (crcp) - *crcp = crc; - return 0; err1: @@ -377,8 +371,7 @@ int copy_data( struct rxe_dma_info *dma, void *addr, int length, - enum rxe_mr_copy_dir dir, - u32 *crcp) + enum rxe_mr_copy_dir dir) { int bytes; struct rxe_sge *sge = &dma->sge[dma->cur_sge]; @@ -439,7 +432,7 @@ int copy_data( if (bytes > 0) { iova = sge->addr + offset; - err = rxe_mr_copy(mr, iova, addr, bytes, dir, crcp); + err = rxe_mr_copy(mr, iova, addr, bytes, dir); if (err) goto err2; diff --git a/drivers/infiniband/sw/rxe/rxe_net.c b/drivers/infiniband/sw/rxe/rxe_net.c index 6968c247bcf7..ffbe8f95405e 100644 --- a/drivers/infiniband/sw/rxe/rxe_net.c +++ b/drivers/infiniband/sw/rxe/rxe_net.c @@ -343,7 +343,7 @@ static int prepare6(struct rxe_pkt_info *pkt, struct sk_buff *skb) return 0; } -int rxe_prepare(struct rxe_pkt_info *pkt, struct sk_buff *skb, u32 *crc) +int rxe_prepare(struct rxe_pkt_info *pkt, struct sk_buff *skb) { int err = 0; @@ -352,8 +352,6 @@ int rxe_prepare(struct rxe_pkt_info *pkt, struct sk_buff *skb, u32 *crc) else if (skb->protocol == htons(ETH_P_IPV6)) err = prepare6(pkt, skb); - *crc = rxe_icrc_hdr(pkt, skb); - if (ether_addr_equal(skb->dev->dev_addr, rxe_get_av(pkt)->dmac)) pkt->mask |= RXE_LOOPBACK_MASK; @@ -396,7 +394,7 @@ static int rxe_send(struct rxe_pkt_info *pkt, struct sk_buff *skb) } if (unlikely(net_xmit_eval(err))) { - pr_debug("error sending packet: %d\n", err); + pr_info("error sending packet: %d\n", err); return -EAGAIN; } @@ -438,6 +436,8 @@ int rxe_xmit_packet(struct rxe_qp *qp, struct rxe_pkt_info *pkt, 
goto drop; } + rxe_icrc_generate(pkt, skb); + if (pkt->mask & RXE_LOOPBACK_MASK) err = rxe_loopback(pkt, skb); else diff --git a/drivers/infiniband/sw/rxe/rxe_req.c b/drivers/infiniband/sw/rxe/rxe_req.c index c57699cc6578..3894197a82f6 100644 --- a/drivers/infiniband/sw/rxe/rxe_req.c +++ b/drivers/infiniband/sw/rxe/rxe_req.c @@ -466,12 +466,9 @@ static int finish_packet(struct rxe_qp *qp, struct rxe_send_wqe *wqe, struct rxe_pkt_info *pkt, struct sk_buff *skb, int paylen) { - struct rxe_dev *rxe = to_rdev(qp->ibqp.device); - u32 crc = 0; - u32 *p; int err; - err = rxe_prepare(pkt, skb, &crc); + err = rxe_prepare(pkt, skb); if (err) return err; @@ -479,7 +476,6 @@ static int finish_packet(struct rxe_qp *qp, struct rxe_send_wqe *wqe, if (wqe->wr.send_flags & IB_SEND_INLINE) { u8 *tmp = &wqe->dma.inline_data[wqe->dma.sge_offset]; - crc = rxe_crc32(rxe, crc, tmp, paylen); memcpy(payload_addr(pkt), tmp, paylen); wqe->dma.resid -= paylen; @@ -487,8 +483,7 @@ static int finish_packet(struct rxe_qp *qp, struct rxe_send_wqe *wqe, } else { err = copy_data(qp->pd, 0, &wqe->dma, payload_addr(pkt), paylen, - RXE_FROM_MR_OBJ, - &crc); + RXE_FROM_MR_OBJ); if (err) return err; } @@ -496,12 +491,8 @@ static int finish_packet(struct rxe_qp *qp, struct rxe_send_wqe *wqe, u8 *pad = payload_addr(pkt) + paylen; memset(pad, 0, bth_pad(pkt)); - crc = rxe_crc32(rxe, crc, pad, bth_pad(pkt)); } } - p = payload_addr(pkt) + paylen + bth_pad(pkt); - - *p = ~crc; return 0; } diff --git a/drivers/infiniband/sw/rxe/rxe_resp.c b/drivers/infiniband/sw/rxe/rxe_resp.c index 3743dc39b60c..685b8aebd627 100644 --- a/drivers/infiniband/sw/rxe/rxe_resp.c +++ b/drivers/infiniband/sw/rxe/rxe_resp.c @@ -536,7 +536,7 @@ static enum resp_states send_data_in(struct rxe_qp *qp, void *data_addr, int err; err = copy_data(qp->pd, IB_ACCESS_LOCAL_WRITE, &qp->resp.wqe->dma, - data_addr, data_len, RXE_TO_MR_OBJ, NULL); + data_addr, data_len, RXE_TO_MR_OBJ); if (unlikely(err)) return (err == -ENOSPC) ? 
RESPST_ERR_LENGTH : RESPST_ERR_MALFORMED_WQE; @@ -552,7 +552,7 @@ static enum resp_states write_data_in(struct rxe_qp *qp, int data_len = payload_size(pkt); err = rxe_mr_copy(qp->resp.mr, qp->resp.va + qp->resp.offset, - payload_addr(pkt), data_len, RXE_TO_MR_OBJ, NULL); + payload_addr(pkt), data_len, RXE_TO_MR_OBJ); if (err) { rc = RESPST_ERR_RKEY_VIOLATION; goto out; @@ -613,13 +613,10 @@ static struct sk_buff *prepare_ack_packet(struct rxe_qp *qp, int opcode, int payload, u32 psn, - u8 syndrome, - u32 *crcp) + u8 syndrome) { struct rxe_dev *rxe = to_rdev(qp->ibqp.device); struct sk_buff *skb; - u32 crc = 0; - u32 *p; int paylen; int pad; int err; @@ -651,20 +648,12 @@ static struct sk_buff *prepare_ack_packet(struct rxe_qp *qp, if (ack->mask & RXE_ATMACK_MASK) atmack_set_orig(ack, qp->resp.atomic_orig); - err = rxe_prepare(ack, skb, &crc); + err = rxe_prepare(ack, skb); if (err) { kfree_skb(skb); return NULL; } - if (crcp) { - /* CRC computation will be continued by the caller */ - *crcp = crc; - } else { - p = payload_addr(ack) + payload + bth_pad(ack); - *p = ~crc; - } - return skb; } @@ -682,8 +671,6 @@ static enum resp_states read_reply(struct rxe_qp *qp, int opcode; int err; struct resp_res *res = qp->resp.res; - u32 icrc; - u32 *p; if (!res) { /* This is the first time we process that request. 
Get a @@ -742,24 +729,20 @@ static enum resp_states read_reply(struct rxe_qp *qp, payload = min_t(int, res->read.resid, mtu); skb = prepare_ack_packet(qp, req_pkt, &ack_pkt, opcode, payload, - res->cur_psn, AETH_ACK_UNLIMITED, &icrc); + res->cur_psn, AETH_ACK_UNLIMITED); if (!skb) return RESPST_ERR_RNR; err = rxe_mr_copy(res->read.mr, res->read.va, payload_addr(&ack_pkt), - payload, RXE_FROM_MR_OBJ, &icrc); + payload, RXE_FROM_MR_OBJ); if (err) pr_err("Failed copying memory\n"); if (bth_pad(&ack_pkt)) { - struct rxe_dev *rxe = to_rdev(qp->ibqp.device); u8 *pad = payload_addr(&ack_pkt) + payload; memset(pad, 0, bth_pad(&ack_pkt)); - icrc = rxe_crc32(rxe, icrc, pad, bth_pad(&ack_pkt)); } - p = payload_addr(&ack_pkt) + payload + bth_pad(&ack_pkt); - *p = ~icrc; err = rxe_xmit_packet(qp, &ack_pkt, skb); if (err) { @@ -984,7 +967,7 @@ static int send_ack(struct rxe_qp *qp, struct rxe_pkt_info *pkt, struct sk_buff *skb; skb = prepare_ack_packet(qp, pkt, &ack_pkt, IB_OPCODE_RC_ACKNOWLEDGE, - 0, psn, syndrome, NULL); + 0, psn, syndrome); if (!skb) { err = -ENOMEM; goto err1; @@ -1008,7 +991,7 @@ static int send_atomic_ack(struct rxe_qp *qp, struct rxe_pkt_info *pkt, skb = prepare_ack_packet(qp, pkt, &ack_pkt, IB_OPCODE_RC_ATOMIC_ACKNOWLEDGE, 0, pkt->psn, - syndrome, NULL); + syndrome); if (!skb) { rc = -ENOMEM; goto out; From patchwork Tue Jun 29 20:28:04 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Bob Pearson X-Patchwork-Id: 12350809 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-15.8 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,FREEMAIL_FORGED_FROMDOMAIN,FREEMAIL_FROM, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org 
From: Bob Pearson
To: jgg@nvidia.com, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org
Cc: Bob Pearson
Subject: [PATCH for-next resending 4/5] RDMA/rxe: Move rxe_crc32 to a subroutine
Date: Tue, 29 Jun 2021 15:28:04 -0500
Message-Id: <20210629202804.29403-5-rpearsonhpe@gmail.com>
In-Reply-To: <20210629202804.29403-1-rpearsonhpe@gmail.com>
References: <20210629202804.29403-1-rpearsonhpe@gmail.com>
List-ID: X-Mailing-List: linux-rdma@vger.kernel.org

Move rxe_crc32 from rxe.h to rxe_icrc.c as a static local function.
Add some comments to rxe_icrc.c.

Signed-off-by: Bob Pearson
---
 drivers/infiniband/sw/rxe/rxe.h      | 21 ------------
 drivers/infiniband/sw/rxe/rxe_icrc.c | 50 +++++++++++++++++++++++++---
 drivers/infiniband/sw/rxe/rxe_loc.h  |  1 -
 3 files changed, 45 insertions(+), 27 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe.h b/drivers/infiniband/sw/rxe/rxe.h
index 623fd17df02d..65a73c1c8b35 100644
--- a/drivers/infiniband/sw/rxe/rxe.h
+++ b/drivers/infiniband/sw/rxe/rxe.h
@@ -42,27 +42,6 @@
 extern bool rxe_initialized;
 
-static inline u32 rxe_crc32(struct rxe_dev *rxe,
-			    u32 crc, void *next, size_t len)
-{
-	u32 retval;
-	int err;
-
-	SHASH_DESC_ON_STACK(shash, rxe->tfm);
-
-	shash->tfm = rxe->tfm;
-	*(u32 *)shash_desc_ctx(shash) = crc;
-	err = crypto_shash_update(shash, next, len);
-	if (unlikely(err)) {
-		pr_warn_ratelimited("failed crc calculation, err: %d\n", err);
-		return crc32_le(crc, next, len);
-	}
-
-	retval = *(u32 *)shash_desc_ctx(shash);
-	barrier_data(shash_desc_ctx(shash));
-	return retval;
-}
-
 void rxe_set_mtu(struct rxe_dev *rxe, unsigned int dev_mtu);
 
 int rxe_add(struct rxe_dev *rxe, unsigned int mtu, const char *ibdev_name);

diff --git a/drivers/infiniband/sw/rxe/rxe_icrc.c b/drivers/infiniband/sw/rxe/rxe_icrc.c
index 5424b8bea908..e116c63d7b84 100644
--- a/drivers/infiniband/sw/rxe/rxe_icrc.c
+++ b/drivers/infiniband/sw/rxe/rxe_icrc.c
@@ -7,8 +7,44 @@
 #include "rxe.h"
 #include "rxe_loc.h"
 
-/* Compute a partial ICRC for all the IB transport headers. */
-u32 rxe_icrc_hdr(struct rxe_pkt_info *pkt, struct sk_buff *skb)
+/**
+ * rxe_crc32 - Compute incremental crc32 for a contiguous segment
+ * @rxe: rdma_rxe device object
+ * @crc: starting crc32 value from previous segments
+ * @addr: starting address of segment
+ * @len: length of the segment in bytes
+ *
+ * Returns the crc32 checksum of the segment starting from crc.
+ */
+static u32 rxe_crc32(struct rxe_dev *rxe, u32 crc, void *addr, size_t len)
+{
+	u32 icrc;
+	int err;
+
+	SHASH_DESC_ON_STACK(shash, rxe->tfm);
+
+	shash->tfm = rxe->tfm;
+	*(u32 *)shash_desc_ctx(shash) = crc;
+	err = crypto_shash_update(shash, addr, len);
+	if (unlikely(err)) {
+		pr_warn_ratelimited("failed crc calculation, err: %d\n", err);
+		return crc32_le(crc, addr, len);
+	}
+
+	icrc = *(u32 *)shash_desc_ctx(shash);
+	barrier_data(shash_desc_ctx(shash));
+
+	return icrc;
+}
+
+/**
+ * rxe_icrc_hdr - Compute a partial ICRC for the IB transport headers.
+ * @pkt: Information about the current packet
+ * @skb: The packet buffer
+ *
+ * Returns the partial ICRC
+ */
+static u32 rxe_icrc_hdr(struct rxe_pkt_info *pkt, struct sk_buff *skb)
 {
 	unsigned int bth_offset = 0;
 	struct iphdr *ip4h = NULL;
@@ -71,9 +107,9 @@ u32 rxe_icrc_hdr(struct rxe_pkt_info *pkt, struct sk_buff *skb)
 /**
  * rxe_icrc_check - Compute ICRC for a packet and compare to the ICRC
  * delivered in the packet.
- * @skb: The packet buffer with packet info in skb->cb[] (receive path)
+ * @skb: packet buffer with packet info in skb->cb[] (receive path)
  *
- * Returns 0 on success or an error on failure
+ * Returns 0 if the ICRCs match or an error on failure
  */
 int rxe_icrc_check(struct sk_buff *skb)
 {
@@ -106,7 +142,11 @@ int rxe_icrc_check(struct sk_buff *skb)
 	return 0;
 }
 
-/* rxe_icrc_generate- compute ICRC for a packet. */
+/**
+ * rxe_icrc_generate - Compute ICRC for a packet.
+ * @pkt: packet information
+ * @skb: packet buffer
+ */
 void rxe_icrc_generate(struct rxe_pkt_info *pkt, struct sk_buff *skb)
 {
 	__be32 *icrcp;

diff --git a/drivers/infiniband/sw/rxe/rxe_loc.h b/drivers/infiniband/sw/rxe/rxe_loc.h
index 2c724b9970d6..b08689b664ec 100644
--- a/drivers/infiniband/sw/rxe/rxe_loc.h
+++ b/drivers/infiniband/sw/rxe/rxe_loc.h
@@ -193,7 +193,6 @@ int rxe_requester(void *arg);
 int rxe_responder(void *arg);
 
 /* rxe_icrc.c */
-u32 rxe_icrc_hdr(struct rxe_pkt_info *pkt, struct sk_buff *skb);
 int rxe_icrc_check(struct sk_buff *skb);
 void rxe_icrc_generate(struct rxe_pkt_info *pkt, struct sk_buff *skb);

From patchwork Tue Jun 29 20:28:05 2021
X-Patchwork-Submitter: Bob Pearson
X-Patchwork-Id: 12350811
From: Bob Pearson
To: jgg@nvidia.com, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org
Cc: Bob Pearson
Subject: [PATCH for-next resending 5/5] RDMA/rxe: Move crc32 init code to rxe_icrc.c
Date: Tue, 29 Jun 2021 15:28:05 -0500
Message-Id: <20210629202804.29403-6-rpearsonhpe@gmail.com>
In-Reply-To: <20210629202804.29403-1-rpearsonhpe@gmail.com>
References: <20210629202804.29403-1-rpearsonhpe@gmail.com>
List-ID: X-Mailing-List: linux-rdma@vger.kernel.org

This patch collects the code from rxe_register_device() that sets up the
crc32 calculation into a subroutine rxe_icrc_init() in rxe_icrc.c. This
completes collecting all the code specific to computing ICRC into one
file with a simple set of APIs. Minor cleanups in rxe_icrc.c to comments
and byte order types.

Signed-off-by: Bob Pearson
---
 drivers/infiniband/sw/rxe/rxe.h       |  1 -
 drivers/infiniband/sw/rxe/rxe_icrc.c  | 75 +++++++++++++++++----------
 drivers/infiniband/sw/rxe/rxe_loc.h   |  1 +
 drivers/infiniband/sw/rxe/rxe_verbs.c | 11 ++--
 4 files changed, 53 insertions(+), 35 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe.h b/drivers/infiniband/sw/rxe/rxe.h
index 65a73c1c8b35..1bb3fb618bf5 100644
--- a/drivers/infiniband/sw/rxe/rxe.h
+++ b/drivers/infiniband/sw/rxe/rxe.h
@@ -14,7 +14,6 @@
 #include
 #include
-#include
 #include
 #include

diff --git a/drivers/infiniband/sw/rxe/rxe_icrc.c b/drivers/infiniband/sw/rxe/rxe_icrc.c
index e116c63d7b84..4f311798d682 100644
--- a/drivers/infiniband/sw/rxe/rxe_icrc.c
+++ b/drivers/infiniband/sw/rxe/rxe_icrc.c
@@ -4,34 +4,59 @@
  * Copyright (c) 2015 System Fabric Works, Inc. All rights reserved.
  */
 
+#include
 #include "rxe.h"
 #include "rxe_loc.h"
 
 /**
- * rxe_crc32 - Compute incremental crc32 for a contiguous segment
+ * rxe_icrc_init - Initialize crypto function for computing crc32
+ * @rxe: rdma_rxe device object
+ *
+ * Returns 0 on success else an error
+ */
+int rxe_icrc_init(struct rxe_dev *rxe)
+{
+	struct crypto_shash *tfm;
+
+	tfm = crypto_alloc_shash("crc32", 0, 0);
+	if (IS_ERR(tfm)) {
+		pr_err("failed to init crc32 algorithm err:%ld\n",
+		       PTR_ERR(tfm));
+		return PTR_ERR(tfm);
+	}
+
+	rxe->tfm = tfm;
+
+	return 0;
+}
+
+/**
+ * rxe_crc32 - Compute cumulative crc32 for a contiguous segment
  * @rxe: rdma_rxe device object
  * @crc: starting crc32 value from previous segments
  * @addr: starting address of segment
  * @len: length of the segment in bytes
  *
- * Returns the crc32 checksum of the segment starting from crc.
+ * Returns the crc32 cumulative checksum including the segment starting
+ * from crc.
  */
-static u32 rxe_crc32(struct rxe_dev *rxe, u32 crc, void *addr, size_t len)
+static __be32 rxe_crc32(struct rxe_dev *rxe, __be32 crc, void *addr,
+			size_t len)
 {
-	u32 icrc;
+	__be32 icrc;
 	int err;
 
 	SHASH_DESC_ON_STACK(shash, rxe->tfm);
 
 	shash->tfm = rxe->tfm;
-	*(u32 *)shash_desc_ctx(shash) = crc;
+	*(__be32 *)shash_desc_ctx(shash) = crc;
 	err = crypto_shash_update(shash, addr, len);
 	if (unlikely(err)) {
 		pr_warn_ratelimited("failed crc calculation, err: %d\n", err);
 		return crc32_le(crc, addr, len);
 	}
 
-	icrc = *(u32 *)shash_desc_ctx(shash);
+	icrc = *(__be32 *)shash_desc_ctx(shash);
 	barrier_data(shash_desc_ctx(shash));
 
 	return icrc;
@@ -39,19 +64,16 @@ static u32 rxe_crc32(struct rxe_dev *rxe, u32 crc, void *addr, size_t len)
 
 /**
  * rxe_icrc_hdr - Compute a partial ICRC for the IB transport headers.
- * @pkt: Information about the current packet
- * @skb: The packet buffer
+ * @pkt: packet information
+ * @skb: packet buffer
  *
  * Returns the partial ICRC
  */
 static u32 rxe_icrc_hdr(struct rxe_pkt_info *pkt, struct sk_buff *skb)
 {
-	unsigned int bth_offset = 0;
-	struct iphdr *ip4h = NULL;
-	struct ipv6hdr *ip6h = NULL;
 	struct udphdr *udph;
 	struct rxe_bth *bth;
-	int crc;
+	__be32 crc;
 	int length;
 	int hdr_size = sizeof(struct udphdr) +
 		(skb->protocol == htons(ETH_P_IP) ?
@@ -69,6 +91,8 @@ static u32 rxe_icrc_hdr(struct rxe_pkt_info *pkt, struct sk_buff *skb)
 	crc = 0xdebb20e3;
 
 	if (skb->protocol == htons(ETH_P_IP)) { /* IPv4 */
+		struct iphdr *ip4h = NULL;
+
 		memcpy(pshdr, ip_hdr(skb), hdr_size);
 		ip4h = (struct iphdr *)pshdr;
 		udph = (struct udphdr *)(ip4h + 1);
@@ -77,6 +101,8 @@ static u32 rxe_icrc_hdr(struct rxe_pkt_info *pkt, struct sk_buff *skb)
 		ip4h->check = CSUM_MANGLED_0;
 		ip4h->tos = 0xff;
 	} else {				/* IPv6 */
+		struct ipv6hdr *ip6h = NULL;
+
 		memcpy(pshdr, ipv6_hdr(skb), hdr_size);
 		ip6h = (struct ipv6hdr *)pshdr;
 		udph = (struct udphdr *)(ip6h + 1);
@@ -85,12 +111,9 @@ static u32 rxe_icrc_hdr(struct rxe_pkt_info *pkt, struct sk_buff *skb)
 		ip6h->priority = 0xf;
 		ip6h->hop_limit = 0xff;
 	}
-	udph->check = CSUM_MANGLED_0;
-
-	bth_offset += hdr_size;
-	memcpy(&pshdr[bth_offset], pkt->hdr, RXE_BTH_BYTES);
-	bth = (struct rxe_bth *)&pshdr[bth_offset];
+	bth = (struct rxe_bth *)(udph + 1);
+	memcpy(bth, pkt->hdr, RXE_BTH_BYTES);
 
 	/* exclude bth.resv8a */
 	bth->qpn |= cpu_to_be32(~BTH_QPN_MASK);
@@ -115,18 +138,18 @@ int rxe_icrc_check(struct sk_buff *skb)
 {
 	struct rxe_pkt_info *pkt = SKB_TO_PKT(skb);
 	__be32 *icrcp;
-	u32 pkt_icrc;
-	u32 icrc;
+	__be32 packet_icrc;
+	__be32 computed_icrc;
 
 	icrcp = (__be32 *)(pkt->hdr + pkt->paylen - RXE_ICRC_SIZE);
-	pkt_icrc = be32_to_cpu(*icrcp);
+	packet_icrc = *icrcp;
 
-	icrc = rxe_icrc_hdr(pkt, skb);
-	icrc = rxe_crc32(pkt->rxe, icrc, (u8 *)payload_addr(pkt),
-			 payload_size(pkt) + bth_pad(pkt));
-	icrc = (__force u32)cpu_to_be32(~icrc);
+	computed_icrc = rxe_icrc_hdr(pkt, skb);
+	computed_icrc = rxe_crc32(pkt->rxe, computed_icrc,
+			(u8 *)payload_addr(pkt), payload_size(pkt) + bth_pad(pkt));
+	computed_icrc = ~computed_icrc;
 
-	if (unlikely(icrc != pkt_icrc)) {
+	if (unlikely(computed_icrc != packet_icrc)) {
 		if (skb->protocol == htons(ETH_P_IPV6))
 			pr_warn_ratelimited("bad ICRC from %pI6c\n",
 					    &ipv6_hdr(skb)->saddr);
@@ -150,7 +173,7 @@
 void rxe_icrc_generate(struct rxe_pkt_info *pkt, struct sk_buff *skb)
 {
 	__be32 *icrcp;
-	u32 icrc;
+	__be32 icrc;
 
 	icrcp = (__be32 *)(pkt->hdr + pkt->paylen - RXE_ICRC_SIZE);
 	icrc = rxe_icrc_hdr(pkt, skb);

diff --git a/drivers/infiniband/sw/rxe/rxe_loc.h b/drivers/infiniband/sw/rxe/rxe_loc.h
index b08689b664ec..f98378f8ff31 100644
--- a/drivers/infiniband/sw/rxe/rxe_loc.h
+++ b/drivers/infiniband/sw/rxe/rxe_loc.h
@@ -193,6 +193,7 @@ int rxe_requester(void *arg);
 int rxe_responder(void *arg);
 
 /* rxe_icrc.c */
+int rxe_icrc_init(struct rxe_dev *rxe);
 int rxe_icrc_check(struct sk_buff *skb);
 void rxe_icrc_generate(struct rxe_pkt_info *pkt, struct sk_buff *skb);

diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.c b/drivers/infiniband/sw/rxe/rxe_verbs.c
index c223959ac174..f7b1a1f64c13 100644
--- a/drivers/infiniband/sw/rxe/rxe_verbs.c
+++ b/drivers/infiniband/sw/rxe/rxe_verbs.c
@@ -1154,7 +1154,6 @@ int rxe_register_device(struct rxe_dev *rxe, const char *ibdev_name)
 {
 	int err;
 	struct ib_device *dev = &rxe->ib_dev;
-	struct crypto_shash *tfm;
 
 	strscpy(dev->node_desc, "rxe", sizeof(dev->node_desc));
 
@@ -1173,13 +1172,9 @@ int rxe_register_device(struct rxe_dev *rxe, const char *ibdev_name)
 	if (err)
 		return err;
 
-	tfm = crypto_alloc_shash("crc32", 0, 0);
-	if (IS_ERR(tfm)) {
-		pr_err("failed to allocate crc algorithm err:%ld\n",
-		       PTR_ERR(tfm));
-		return PTR_ERR(tfm);
-	}
-	rxe->tfm = tfm;
+	err = rxe_icrc_init(rxe);
+	if (err)
+		return err;
 
 	err = ib_register_device(dev, ibdev_name, NULL);
 	if (err)