From patchwork Tue Jun 29 20:14:06 2021
X-Patchwork-Submitter: Bob Pearson
X-Patchwork-Id: 12350785
From: Bob Pearson
To: jgg@nvidia.com, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org
Cc: Bob Pearson
Subject: [PATCH 1/5] RDMA/rxe: Move ICRC checking to a subroutine
Date: Tue, 29 Jun 2021 15:14:06 -0500
Message-Id: <20210629201412.28306-2-rpearsonhpe@gmail.com>
In-Reply-To: <20210629201412.28306-1-rpearsonhpe@gmail.com>
References: <20210629201412.28306-1-rpearsonhpe@gmail.com>
X-Mailer: git-send-email 2.30.2
X-Mailing-List: linux-rdma@vger.kernel.org

Move the code in rxe_rcv() that checks the ICRC on incoming packets to
a subroutine rxe_icrc_check() in rxe_icrc.c.

Signed-off-by: Bob Pearson
---
 drivers/infiniband/sw/rxe/rxe_icrc.c | 38 ++++++++++++++++++++++++++++
 drivers/infiniband/sw/rxe/rxe_loc.h  |  2 ++
 drivers/infiniband/sw/rxe/rxe_recv.c | 23 ++---------------
 3 files changed, 42 insertions(+), 21 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe_icrc.c b/drivers/infiniband/sw/rxe/rxe_icrc.c
index 66b2aad54bb7..5193dfa94a75 100644
--- a/drivers/infiniband/sw/rxe/rxe_icrc.c
+++ b/drivers/infiniband/sw/rxe/rxe_icrc.c
@@ -67,3 +67,41 @@ u32 rxe_icrc_hdr(struct rxe_pkt_info *pkt, struct sk_buff *skb)
 		rxe_opcode[pkt->opcode].length - RXE_BTH_BYTES);
 	return crc;
 }
+
+/**
+ * rxe_icrc_check - Compute ICRC for a packet and compare to the ICRC
+ *		    delivered in the packet.
+ * @skb: The packet buffer with packet info in skb->cb[] (receive path)
+ *
+ * Returns 0 on success or an error on failure
+ */
+int rxe_icrc_check(struct sk_buff *skb)
+{
+	struct rxe_pkt_info *pkt = SKB_TO_PKT(skb);
+	__be32 *icrcp;
+	u32 pkt_icrc;
+	u32 icrc;
+
+	icrcp = (__be32 *)(pkt->hdr + pkt->paylen - RXE_ICRC_SIZE);
+	pkt_icrc = be32_to_cpu(*icrcp);
+
+	icrc = rxe_icrc_hdr(pkt, skb);
+	icrc = rxe_crc32(pkt->rxe, icrc, (u8 *)payload_addr(pkt),
+			 payload_size(pkt) + bth_pad(pkt));
+	icrc = (__force u32)cpu_to_be32(~icrc);
+
+	if (unlikely(icrc != pkt_icrc)) {
+		if (skb->protocol == htons(ETH_P_IPV6))
+			pr_warn_ratelimited("bad ICRC from %pI6c\n",
+					    &ipv6_hdr(skb)->saddr);
+		else if (skb->protocol == htons(ETH_P_IP))
+			pr_warn_ratelimited("bad ICRC from %pI4\n",
+					    &ip_hdr(skb)->saddr);
+		else
+			pr_warn_ratelimited("bad ICRC from unknown\n");
+
+		return -EINVAL;
+	}
+
+	return 0;
+}
diff --git a/drivers/infiniband/sw/rxe/rxe_loc.h b/drivers/infiniband/sw/rxe/rxe_loc.h
index 1ddb20855dee..6689e51647db 100644
--- a/drivers/infiniband/sw/rxe/rxe_loc.h
+++ b/drivers/infiniband/sw/rxe/rxe_loc.h
@@ -193,7 +193,9 @@ int rxe_completer(void *arg);
 int rxe_requester(void *arg);
 int rxe_responder(void *arg);

+/* rxe_icrc.c */
 u32 rxe_icrc_hdr(struct rxe_pkt_info *pkt, struct sk_buff *skb);
+int rxe_icrc_check(struct sk_buff *skb);

 void rxe_resp_queue_pkt(struct rxe_qp *qp, struct sk_buff *skb);
diff --git a/drivers/infiniband/sw/rxe/rxe_recv.c b/drivers/infiniband/sw/rxe/rxe_recv.c
index 7a49e27da23a..8582b3163e2c 100644
--- a/drivers/infiniband/sw/rxe/rxe_recv.c
+++ b/drivers/infiniband/sw/rxe/rxe_recv.c
@@ -361,8 +361,6 @@ void rxe_rcv(struct sk_buff *skb)
 	int err;
 	struct rxe_pkt_info *pkt = SKB_TO_PKT(skb);
 	struct rxe_dev *rxe = pkt->rxe;
-	__be32 *icrcp;
-	u32 calc_icrc, pack_icrc;

 	if (unlikely(skb->len < RXE_BTH_BYTES))
 		goto drop;
@@ -384,26 +382,9 @@ void rxe_rcv(struct sk_buff *skb)
 	if (unlikely(err))
 		goto drop;

-	/* Verify ICRC */
-	icrcp = (__be32 *)(pkt->hdr + pkt->paylen - RXE_ICRC_SIZE);
-	pack_icrc = be32_to_cpu(*icrcp);
-
-	calc_icrc = rxe_icrc_hdr(pkt, skb);
-	calc_icrc = rxe_crc32(rxe, calc_icrc, (u8 *)payload_addr(pkt),
-			      payload_size(pkt) + bth_pad(pkt));
-	calc_icrc = (__force u32)cpu_to_be32(~calc_icrc);
-	if (unlikely(calc_icrc != pack_icrc)) {
-		if (skb->protocol == htons(ETH_P_IPV6))
-			pr_warn_ratelimited("bad ICRC from %pI6c\n",
-					    &ipv6_hdr(skb)->saddr);
-		else if (skb->protocol == htons(ETH_P_IP))
-			pr_warn_ratelimited("bad ICRC from %pI4\n",
-					    &ip_hdr(skb)->saddr);
-		else
-			pr_warn_ratelimited("bad ICRC from unknown\n");
-
+	err = rxe_icrc_check(skb);
+	if (unlikely(err))
 		goto drop;
-	}

 	rxe_counter_inc(rxe, RXE_CNT_RCVD_PKTS);

From patchwork Tue Jun 29 20:14:07 2021
X-Patchwork-Submitter: Bob Pearson
X-Patchwork-Id: 12350787
From: Bob Pearson
To: jgg@nvidia.com, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org
Cc: Bob Pearson
Subject: [PATCH 2/5] RDMA/rxe: Move rxe_xmit_packet to a subroutine
Date: Tue, 29 Jun 2021 15:14:07 -0500
Message-Id: <20210629201412.28306-3-rpearsonhpe@gmail.com>

rxe_xmit_packet() was an overlong inline subroutine. Move it into
rxe_net.c as an ordinary subroutine. Change rxe_loopback() and
rxe_send() to static subroutines since they are no longer shared.
Allow rxe_loopback() to return an error.

Signed-off-by: Bob Pearson
---
 drivers/infiniband/sw/rxe/rxe_loc.h | 47 ++-----------------------
 drivers/infiniband/sw/rxe/rxe_net.c | 54 ++++++++++++++++++++++++++---
 2 files changed, 51 insertions(+), 50 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe_loc.h b/drivers/infiniband/sw/rxe/rxe_loc.h
index 6689e51647db..3468a61efe4e 100644
--- a/drivers/infiniband/sw/rxe/rxe_loc.h
+++ b/drivers/infiniband/sw/rxe/rxe_loc.h
@@ -99,11 +99,11 @@ struct rxe_mw *rxe_lookup_mw(struct rxe_qp *qp, int access, u32 rkey);
 void rxe_mw_cleanup(struct rxe_pool_entry *arg);

 /* rxe_net.c */
-void rxe_loopback(struct sk_buff *skb);
-int rxe_send(struct rxe_pkt_info *pkt, struct sk_buff *skb);
 struct sk_buff *rxe_init_packet(struct rxe_dev *rxe, struct rxe_av *av,
 				int paylen, struct rxe_pkt_info *pkt);
 int rxe_prepare(struct rxe_pkt_info *pkt, struct sk_buff *skb, u32 *crc);
+int rxe_xmit_packet(struct rxe_qp *qp, struct rxe_pkt_info *pkt,
+		    struct sk_buff *skb);
 const char *rxe_parent_name(struct rxe_dev *rxe, unsigned int port_num);

 int rxe_mcast_add(struct rxe_dev *rxe, union ib_gid *mgid);
 int rxe_mcast_delete(struct rxe_dev *rxe, union ib_gid *mgid);
@@ -206,47 +206,4 @@ static inline unsigned int wr_opcode_mask(int opcode, struct rxe_qp *qp)
 	return rxe_wr_opcode_info[opcode].mask[qp->ibqp.qp_type];
 }

-static inline int rxe_xmit_packet(struct rxe_qp *qp, struct rxe_pkt_info *pkt,
-				  struct sk_buff *skb)
-{
-	int err;
-	int is_request = pkt->mask & RXE_REQ_MASK;
-	struct rxe_dev *rxe = to_rdev(qp->ibqp.device);
-
-	if ((is_request && (qp->req.state != QP_STATE_READY)) ||
-	    (!is_request && (qp->resp.state != QP_STATE_READY))) {
-		pr_info("Packet dropped. QP is not in ready state\n");
-		goto drop;
-	}
-
-	if (pkt->mask & RXE_LOOPBACK_MASK) {
-		memcpy(SKB_TO_PKT(skb), pkt, sizeof(*pkt));
-		rxe_loopback(skb);
-		err = 0;
-	} else {
-		err = rxe_send(pkt, skb);
-	}
-
-	if (err) {
-		rxe->xmit_errors++;
-		rxe_counter_inc(rxe, RXE_CNT_SEND_ERR);
-		return err;
-	}
-
-	if ((qp_type(qp) != IB_QPT_RC) &&
-	    (pkt->mask & RXE_END_MASK)) {
-		pkt->wqe->state = wqe_state_done;
-		rxe_run_task(&qp->comp.task, 1);
-	}
-
-	rxe_counter_inc(rxe, RXE_CNT_SENT_PKTS);
-	goto done;
-
-drop:
-	kfree_skb(skb);
-	err = 0;
-done:
-	return err;
-}
-
 #endif /* RXE_LOC_H */
diff --git a/drivers/infiniband/sw/rxe/rxe_net.c b/drivers/infiniband/sw/rxe/rxe_net.c
index dec92928a1cd..6968c247bcf7 100644
--- a/drivers/infiniband/sw/rxe/rxe_net.c
+++ b/drivers/infiniband/sw/rxe/rxe_net.c
@@ -373,7 +373,7 @@ static void rxe_skb_tx_dtor(struct sk_buff *skb)
 	rxe_drop_ref(qp);
 }

-int rxe_send(struct rxe_pkt_info *pkt, struct sk_buff *skb)
+static int rxe_send(struct rxe_pkt_info *pkt, struct sk_buff *skb)
 {
 	int err;

@@ -406,19 +406,63 @@ int rxe_send(struct rxe_pkt_info *pkt, struct sk_buff *skb)
 /* fix up a send packet to match the packets
  * received from UDP before looping them back
  */
-void rxe_loopback(struct sk_buff *skb)
+static int rxe_loopback(struct rxe_pkt_info *pkt, struct sk_buff *skb)
 {
-	struct rxe_pkt_info *pkt = SKB_TO_PKT(skb);
+	memcpy(SKB_TO_PKT(skb), pkt, sizeof(*pkt));

 	if (skb->protocol == htons(ETH_P_IP))
 		skb_pull(skb, sizeof(struct iphdr));
 	else
 		skb_pull(skb, sizeof(struct ipv6hdr));

-	if (WARN_ON(!ib_device_try_get(&pkt->rxe->ib_dev)))
+	if (WARN_ON(!ib_device_try_get(&pkt->rxe->ib_dev))) {
 		kfree_skb(skb);
+		return -EIO;
+	}
+
+	rxe_rcv(skb);
+
+	return 0;
+}
+
+int rxe_xmit_packet(struct rxe_qp *qp, struct rxe_pkt_info *pkt,
+		    struct sk_buff *skb)
+{
+	int is_request = pkt->mask & RXE_REQ_MASK;
+	struct rxe_dev *rxe = to_rdev(qp->ibqp.device);
+	int err;
+
+	if ((is_request && (qp->req.state != QP_STATE_READY)) ||
+	    (!is_request && (qp->resp.state != QP_STATE_READY))) {
+		pr_info("Packet dropped. QP is not in ready state\n");
+		goto drop;
+	}
+
+	if (pkt->mask & RXE_LOOPBACK_MASK)
+		err = rxe_loopback(pkt, skb);
 	else
-		rxe_rcv(skb);
+		err = rxe_send(pkt, skb);
+
+	if (err) {
+		rxe->xmit_errors++;
+		rxe_counter_inc(rxe, RXE_CNT_SEND_ERR);
+		return err;
+	}
+
+	if ((qp_type(qp) != IB_QPT_RC) &&
+	    (pkt->mask & RXE_END_MASK)) {
+		pkt->wqe->state = wqe_state_done;
+		rxe_run_task(&qp->comp.task, 1);
+	}
+
+	rxe_counter_inc(rxe, RXE_CNT_SENT_PKTS);
+	goto done;
+
+drop:
+	kfree_skb(skb);
+	err = 0;
+done:
+	return err;
 }

 struct sk_buff *rxe_init_packet(struct rxe_dev *rxe, struct rxe_av *av,

From patchwork Tue Jun 29 20:14:08 2021
X-Patchwork-Submitter: Bob Pearson
X-Patchwork-Id: 12350789
From: Bob Pearson
To: jgg@nvidia.com, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org
Cc: Bob Pearson
Subject: [PATCH 3/5] RDMA/rxe: Move ICRC generation to a subroutine
Date: Tue, 29 Jun 2021 15:14:08 -0500
Message-Id: <20210629201412.28306-4-rpearsonhpe@gmail.com>

Isolate ICRC generation into a single subroutine named
rxe_icrc_generate() in rxe_icrc.c. Remove the scattered CRC generation
code from elsewhere in the driver.

Signed-off-by: Bob Pearson
---
 drivers/infiniband/sw/rxe/rxe_comp.c |  4 +--
 drivers/infiniband/sw/rxe/rxe_icrc.c | 13 ++++++++
 drivers/infiniband/sw/rxe/rxe_loc.h  | 10 +++---
 drivers/infiniband/sw/rxe/rxe_mr.c   | 47 ++++++++++++----------------
 drivers/infiniband/sw/rxe/rxe_net.c  |  8 ++---
 drivers/infiniband/sw/rxe/rxe_req.c  | 13 ++------
 drivers/infiniband/sw/rxe/rxe_resp.c | 33 +++++--------------
 7 files changed, 54 insertions(+), 74 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe_comp.c b/drivers/infiniband/sw/rxe/rxe_comp.c
index 58ad9c2644f3..d2d802c776fd 100644
--- a/drivers/infiniband/sw/rxe/rxe_comp.c
+++ b/drivers/infiniband/sw/rxe/rxe_comp.c
@@ -349,7 +349,7 @@ static inline enum comp_state do_read(struct rxe_qp *qp,
 	ret = copy_data(qp->pd, IB_ACCESS_LOCAL_WRITE,
 			&wqe->dma, payload_addr(pkt),
-			payload_size(pkt), RXE_TO_MR_OBJ, NULL);
+			payload_size(pkt), RXE_TO_MR_OBJ);
 	if (ret) {
 		wqe->status = IB_WC_LOC_PROT_ERR;
 		return COMPST_ERROR;
@@ -371,7 +371,7 @@ static inline enum comp_state do_atomic(struct rxe_qp *qp,
 	ret = copy_data(qp->pd, IB_ACCESS_LOCAL_WRITE,
 			&wqe->dma, &atomic_orig,
-			sizeof(u64), RXE_TO_MR_OBJ, NULL);
+			sizeof(u64), RXE_TO_MR_OBJ);
 	if (ret) {
 		wqe->status = IB_WC_LOC_PROT_ERR;
 		return COMPST_ERROR;
diff --git a/drivers/infiniband/sw/rxe/rxe_icrc.c b/drivers/infiniband/sw/rxe/rxe_icrc.c
index 5193dfa94a75..5424b8bea908 100644
--- a/drivers/infiniband/sw/rxe/rxe_icrc.c
+++ b/drivers/infiniband/sw/rxe/rxe_icrc.c
@@ -105,3 +105,16 @@ int rxe_icrc_check(struct sk_buff *skb)

 	return 0;
 }
+
+/* rxe_icrc_generate - compute ICRC for a packet. */
+void rxe_icrc_generate(struct rxe_pkt_info *pkt, struct sk_buff *skb)
+{
+	__be32 *icrcp;
+	u32 icrc;
+
+	icrcp = (__be32 *)(pkt->hdr + pkt->paylen - RXE_ICRC_SIZE);
+	icrc = rxe_icrc_hdr(pkt, skb);
+	icrc = rxe_crc32(pkt->rxe, icrc, (u8 *)payload_addr(pkt),
+			 payload_size(pkt) + bth_pad(pkt));
+	*icrcp = ~icrc;
+}
diff --git a/drivers/infiniband/sw/rxe/rxe_loc.h b/drivers/infiniband/sw/rxe/rxe_loc.h
index 3468a61efe4e..2c724b9970d6 100644
--- a/drivers/infiniband/sw/rxe/rxe_loc.h
+++ b/drivers/infiniband/sw/rxe/rxe_loc.h
@@ -77,10 +77,9 @@ int rxe_mr_init_user(struct rxe_pd *pd, u64 start, u64 length, u64 iova,
 		     int access, struct rxe_mr *mr);
 int rxe_mr_init_fast(struct rxe_pd *pd, int max_pages, struct rxe_mr *mr);
 int rxe_mr_copy(struct rxe_mr *mr, u64 iova, void *addr, int length,
-		enum rxe_mr_copy_dir dir, u32 *crcp);
-int copy_data(struct rxe_pd *pd, int access,
-	      struct rxe_dma_info *dma, void *addr, int length,
-	      enum rxe_mr_copy_dir dir, u32 *crcp);
+		enum rxe_mr_copy_dir dir);
+int copy_data(struct rxe_pd *pd, int access, struct rxe_dma_info *dma,
+	      void *addr, int length, enum rxe_mr_copy_dir dir);
 void *iova_to_vaddr(struct rxe_mr *mr, u64 iova, int length);
 struct rxe_mr *lookup_mr(struct rxe_pd *pd, int access, u32 key,
 			 enum rxe_mr_lookup_type type);
@@ -101,7 +100,7 @@ void rxe_mw_cleanup(struct rxe_pool_entry *arg);
 /* rxe_net.c */
 struct sk_buff *rxe_init_packet(struct rxe_dev *rxe, struct rxe_av *av,
 				int paylen, struct rxe_pkt_info *pkt);
-int rxe_prepare(struct rxe_pkt_info *pkt, struct sk_buff *skb, u32 *crc);
+int rxe_prepare(struct rxe_pkt_info *pkt, struct sk_buff *skb);
 int rxe_xmit_packet(struct rxe_qp *qp, struct rxe_pkt_info *pkt,
 		    struct sk_buff *skb);
 const char *rxe_parent_name(struct rxe_dev *rxe, unsigned int port_num);
@@ -196,6 +195,7 @@ int rxe_responder(void *arg);
 /* rxe_icrc.c */
 u32 rxe_icrc_hdr(struct rxe_pkt_info *pkt, struct sk_buff *skb);
 int rxe_icrc_check(struct sk_buff *skb);
+void rxe_icrc_generate(struct rxe_pkt_info *pkt, struct sk_buff *skb);

 void rxe_resp_queue_pkt(struct rxe_qp *qp, struct sk_buff *skb);
diff --git a/drivers/infiniband/sw/rxe/rxe_mr.c b/drivers/infiniband/sw/rxe/rxe_mr.c
index 6aabcb4de235..f94fd143e27b 100644
--- a/drivers/infiniband/sw/rxe/rxe_mr.c
+++ b/drivers/infiniband/sw/rxe/rxe_mr.c
@@ -279,11 +279,10 @@ void *iova_to_vaddr(struct rxe_mr *mr, u64 iova, int length)
 }

 /* copy data from a range (vaddr, vaddr+length-1) to or from
- * a mr object starting at iova. Compute incremental value of
- * crc32 if crcp is not zero. caller must hold a reference to mr
+ * a mr object starting at iova.
  */
 int rxe_mr_copy(struct rxe_mr *mr, u64 iova, void *addr, int length,
-		enum rxe_mr_copy_dir dir, u32 *crcp)
+		enum rxe_mr_copy_dir dir)
 {
 	int err;
 	int bytes;
@@ -293,24 +292,23 @@ int rxe_mr_copy(struct rxe_mr *mr, u64 iova, void *addr, int length,
 	int m;
 	int i;
 	size_t offset;
-	u32 crc = crcp ? (*crcp) : 0;
+	u8 *src;
+	u8 *dest;

 	if (length == 0)
 		return 0;

 	if (mr->type == RXE_MR_TYPE_DMA) {
-		u8 *src, *dest;
-
-		src = (dir == RXE_TO_MR_OBJ) ? addr : ((void *)(uintptr_t)iova);
-
-		dest = (dir == RXE_TO_MR_OBJ) ? ((void *)(uintptr_t)iova) : addr;
+		if (dir == RXE_TO_MR_OBJ) {
+			src = addr;
+			dest = ((void *)(uintptr_t)iova);
+		} else {
+			src = ((void *)(uintptr_t)iova);
+			dest = addr;
+		}

 		memcpy(dest, src, length);

-		if (crcp)
-			*crcp = rxe_crc32(to_rdev(mr->ibmr.device), *crcp, dest,
-					  length);
-
 		return 0;
 	}
@@ -328,11 +326,14 @@ int rxe_mr_copy(struct rxe_mr *mr, u64 iova, void *addr, int length,
 	buf = map[0]->buf + i;

 	while (length > 0) {
-		u8 *src, *dest;
-
 		va = (u8 *)(uintptr_t)buf->addr + offset;
-		src = (dir == RXE_TO_MR_OBJ) ? addr : va;
-		dest = (dir == RXE_TO_MR_OBJ) ? va : addr;
+		if (dir == RXE_TO_MR_OBJ) {
+			src = addr;
+			dest = va;
+		} else {
+			src = va;
+			dest = addr;
+		}

 		bytes = buf->size - offset;
@@ -341,10 +342,6 @@ int rxe_mr_copy(struct rxe_mr *mr, u64 iova, void *addr, int length,
 		memcpy(dest, src, bytes);

-		if (crcp)
-			crc = rxe_crc32(to_rdev(mr->ibmr.device), crc, dest,
-					bytes);
-
 		length -= bytes;
 		addr += bytes;
@@ -359,9 +356,6 @@ int rxe_mr_copy(struct rxe_mr *mr, u64 iova, void *addr, int length,
 		}
 	}

-	if (crcp)
-		*crcp = crc;
-
 	return 0;

 err1:
@@ -377,8 +371,7 @@ int copy_data(
 	struct rxe_dma_info	*dma,
 	void			*addr,
 	int			length,
-	enum rxe_mr_copy_dir	dir,
-	u32			*crcp)
+	enum rxe_mr_copy_dir	dir)
 {
 	int bytes;
 	struct rxe_sge *sge = &dma->sge[dma->cur_sge];
@@ -439,7 +432,7 @@ int copy_data(
 		if (bytes > 0) {
 			iova = sge->addr + offset;

-			err = rxe_mr_copy(mr, iova, addr, bytes, dir, crcp);
+			err = rxe_mr_copy(mr, iova, addr, bytes, dir);
 			if (err)
 				goto err2;
diff --git a/drivers/infiniband/sw/rxe/rxe_net.c b/drivers/infiniband/sw/rxe/rxe_net.c
index 6968c247bcf7..ffbe8f95405e 100644
--- a/drivers/infiniband/sw/rxe/rxe_net.c
+++ b/drivers/infiniband/sw/rxe/rxe_net.c
@@ -343,7 +343,7 @@ static int prepare6(struct rxe_pkt_info *pkt, struct sk_buff *skb)
 	return 0;
 }

-int rxe_prepare(struct rxe_pkt_info *pkt, struct sk_buff *skb, u32 *crc)
+int rxe_prepare(struct rxe_pkt_info *pkt, struct sk_buff *skb)
 {
 	int err = 0;
@@ -352,8 +352,6 @@ int rxe_prepare(struct rxe_pkt_info *pkt, struct sk_buff *skb, u32 *crc)
 	else if (skb->protocol == htons(ETH_P_IPV6))
 		err = prepare6(pkt, skb);

-	*crc = rxe_icrc_hdr(pkt, skb);
-
 	if (ether_addr_equal(skb->dev->dev_addr, rxe_get_av(pkt)->dmac))
 		pkt->mask |= RXE_LOOPBACK_MASK;
@@ -396,7 +394,7 @@ static int rxe_send(struct rxe_pkt_info *pkt, struct sk_buff *skb)
 	}

 	if (unlikely(net_xmit_eval(err))) {
-		pr_debug("error sending packet: %d\n", err);
+		pr_info("error sending packet: %d\n", err);
 		return -EAGAIN;
 	}
@@ -438,6 +436,8 @@ int rxe_xmit_packet(struct rxe_qp *qp, struct rxe_pkt_info *pkt,
 		goto drop;
 	}

+	rxe_icrc_generate(pkt, skb);
+
 	if (pkt->mask & RXE_LOOPBACK_MASK)
 		err = rxe_loopback(pkt, skb);
 	else
diff --git a/drivers/infiniband/sw/rxe/rxe_req.c b/drivers/infiniband/sw/rxe/rxe_req.c
index c57699cc6578..3894197a82f6 100644
--- a/drivers/infiniband/sw/rxe/rxe_req.c
+++ b/drivers/infiniband/sw/rxe/rxe_req.c
@@ -466,12 +466,9 @@ static int finish_packet(struct rxe_qp *qp, struct rxe_send_wqe *wqe,
 			 struct rxe_pkt_info *pkt, struct sk_buff *skb,
 			 int paylen)
 {
-	struct rxe_dev *rxe = to_rdev(qp->ibqp.device);
-	u32 crc = 0;
-	u32 *p;
 	int err;

-	err = rxe_prepare(pkt, skb, &crc);
+	err = rxe_prepare(pkt, skb);
 	if (err)
 		return err;
@@ -479,7 +476,6 @@ static int finish_packet(struct rxe_qp *qp, struct rxe_send_wqe *wqe,
 	if (wqe->wr.send_flags & IB_SEND_INLINE) {
 		u8 *tmp = &wqe->dma.inline_data[wqe->dma.sge_offset];

-		crc = rxe_crc32(rxe, crc, tmp, paylen);
 		memcpy(payload_addr(pkt), tmp, paylen);

 		wqe->dma.resid -= paylen;
@@ -487,8 +483,7 @@ static int finish_packet(struct rxe_qp *qp, struct rxe_send_wqe *wqe,
 	} else {
 		err = copy_data(qp->pd, 0, &wqe->dma,
 				payload_addr(pkt), paylen,
-				RXE_FROM_MR_OBJ,
-				&crc);
+				RXE_FROM_MR_OBJ);
 		if (err)
 			return err;
 	}
@@ -496,12 +491,8 @@ static int finish_packet(struct rxe_qp *qp, struct rxe_send_wqe *wqe,
 			u8 *pad = payload_addr(pkt) + paylen;

 			memset(pad, 0, bth_pad(pkt));
-			crc = rxe_crc32(rxe, crc, pad, bth_pad(pkt));
 		}
 	}
-	p = payload_addr(pkt) + paylen + bth_pad(pkt);
-
-	*p = ~crc;

 	return 0;
 }
diff --git a/drivers/infiniband/sw/rxe/rxe_resp.c b/drivers/infiniband/sw/rxe/rxe_resp.c
index 3743dc39b60c..685b8aebd627 100644
--- a/drivers/infiniband/sw/rxe/rxe_resp.c
+++ b/drivers/infiniband/sw/rxe/rxe_resp.c
@@ -536,7 +536,7 @@ static enum resp_states send_data_in(struct rxe_qp *qp, void *data_addr,
 	int err;

 	err = copy_data(qp->pd, IB_ACCESS_LOCAL_WRITE, &qp->resp.wqe->dma,
-			data_addr, data_len, RXE_TO_MR_OBJ, NULL);
+			data_addr, data_len, RXE_TO_MR_OBJ);
 	if (unlikely(err))
 		return (err == -ENOSPC) ? RESPST_ERR_LENGTH
 					: RESPST_ERR_MALFORMED_WQE;
@@ -552,7 +552,7 @@ static enum resp_states write_data_in(struct rxe_qp *qp,
 	int data_len = payload_size(pkt);

 	err = rxe_mr_copy(qp->resp.mr, qp->resp.va + qp->resp.offset,
-			  payload_addr(pkt), data_len, RXE_TO_MR_OBJ, NULL);
+			  payload_addr(pkt), data_len, RXE_TO_MR_OBJ);
 	if (err) {
 		rc = RESPST_ERR_RKEY_VIOLATION;
 		goto out;
@@ -613,13 +613,10 @@ static struct sk_buff *prepare_ack_packet(struct rxe_qp *qp,
 					  int opcode,
 					  int payload,
 					  u32 psn,
-					  u8 syndrome,
-					  u32 *crcp)
+					  u8 syndrome)
 {
 	struct rxe_dev *rxe = to_rdev(qp->ibqp.device);
 	struct sk_buff *skb;
-	u32 crc = 0;
-	u32 *p;
 	int paylen;
 	int pad;
 	int err;
@@ -651,20 +648,12 @@ static struct sk_buff *prepare_ack_packet(struct rxe_qp *qp,
 	if (ack->mask & RXE_ATMACK_MASK)
 		atmack_set_orig(ack, qp->resp.atomic_orig);

-	err = rxe_prepare(ack, skb, &crc);
+	err = rxe_prepare(ack, skb);
 	if (err) {
 		kfree_skb(skb);
 		return NULL;
 	}

-	if (crcp) {
-		/* CRC computation will be continued by the caller */
-		*crcp = crc;
-	} else {
-		p = payload_addr(ack) + payload + bth_pad(ack);
-		*p = ~crc;
-	}
-
 	return skb;
 }
@@ -682,8 +671,6 @@ static enum resp_states read_reply(struct rxe_qp *qp,
 	int opcode;
 	int err;
 	struct resp_res *res = qp->resp.res;
-	u32 icrc;
-	u32 *p;

 	if (!res) {
 		/* This is the first time we process that request. Get a
@@ -742,24 +729,20 @@ static enum resp_states read_reply(struct rxe_qp *qp,
 	payload = min_t(int, res->read.resid, mtu);

 	skb = prepare_ack_packet(qp, req_pkt, &ack_pkt, opcode, payload,
-				 res->cur_psn, AETH_ACK_UNLIMITED, &icrc);
+				 res->cur_psn, AETH_ACK_UNLIMITED);
 	if (!skb)
 		return RESPST_ERR_RNR;

 	err = rxe_mr_copy(res->read.mr, res->read.va, payload_addr(&ack_pkt),
-			  payload, RXE_FROM_MR_OBJ, &icrc);
+			  payload, RXE_FROM_MR_OBJ);
 	if (err)
 		pr_err("Failed copying memory\n");

 	if (bth_pad(&ack_pkt)) {
-		struct rxe_dev *rxe = to_rdev(qp->ibqp.device);
 		u8 *pad = payload_addr(&ack_pkt) + payload;

 		memset(pad, 0, bth_pad(&ack_pkt));
-		icrc = rxe_crc32(rxe, icrc, pad, bth_pad(&ack_pkt));
 	}
-	p = payload_addr(&ack_pkt) + payload + bth_pad(&ack_pkt);
-	*p = ~icrc;

 	err = rxe_xmit_packet(qp, &ack_pkt, skb);
 	if (err) {
@@ -984,7 +967,7 @@ static int send_ack(struct rxe_qp *qp, struct rxe_pkt_info *pkt,
 	struct sk_buff *skb;

 	skb = prepare_ack_packet(qp, pkt, &ack_pkt, IB_OPCODE_RC_ACKNOWLEDGE,
-				 0, psn, syndrome, NULL);
+				 0, psn, syndrome);
 	if (!skb) {
 		err = -ENOMEM;
 		goto err1;
@@ -1008,7 +991,7 @@ static int send_atomic_ack(struct rxe_qp *qp, struct rxe_pkt_info *pkt,
 	skb = prepare_ack_packet(qp, pkt, &ack_pkt,
 				 IB_OPCODE_RC_ATOMIC_ACKNOWLEDGE, 0, pkt->psn,
-				 syndrome, NULL);
+				 syndrome);
 	if (!skb) {
 		rc = -ENOMEM;
 		goto out;

From patchwork Tue Jun 29 20:14:09 2021
X-Patchwork-Submitter: Bob Pearson
X-Patchwork-Id: 12350791
From: Bob Pearson
To: jgg@nvidia.com, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org
Cc: Bob Pearson
Subject: [PATCH 4/5] RDMA/rxe: Move rxe_crc32 to a subroutine
Date: Tue, 29 Jun 2021 15:14:09 -0500
Message-Id: <20210629201412.28306-5-rpearsonhpe@gmail.com>
In-Reply-To: <20210629201412.28306-1-rpearsonhpe@gmail.com>
References: <20210629201412.28306-1-rpearsonhpe@gmail.com>

Move rxe_crc32 from rxe.h to rxe_icrc.c as a static local function.
Add some comments to rxe_icrc.c Signed-off-by: Bob Pearson --- drivers/infiniband/sw/rxe/rxe.h | 21 ------------ drivers/infiniband/sw/rxe/rxe_icrc.c | 50 +++++++++++++++++++++++++--- drivers/infiniband/sw/rxe/rxe_loc.h | 1 - 3 files changed, 45 insertions(+), 27 deletions(-) diff --git a/drivers/infiniband/sw/rxe/rxe.h b/drivers/infiniband/sw/rxe/rxe.h index 623fd17df02d..65a73c1c8b35 100644 --- a/drivers/infiniband/sw/rxe/rxe.h +++ b/drivers/infiniband/sw/rxe/rxe.h @@ -42,27 +42,6 @@ extern bool rxe_initialized; -static inline u32 rxe_crc32(struct rxe_dev *rxe, - u32 crc, void *next, size_t len) -{ - u32 retval; - int err; - - SHASH_DESC_ON_STACK(shash, rxe->tfm); - - shash->tfm = rxe->tfm; - *(u32 *)shash_desc_ctx(shash) = crc; - err = crypto_shash_update(shash, next, len); - if (unlikely(err)) { - pr_warn_ratelimited("failed crc calculation, err: %d\n", err); - return crc32_le(crc, next, len); - } - - retval = *(u32 *)shash_desc_ctx(shash); - barrier_data(shash_desc_ctx(shash)); - return retval; -} - void rxe_set_mtu(struct rxe_dev *rxe, unsigned int dev_mtu); int rxe_add(struct rxe_dev *rxe, unsigned int mtu, const char *ibdev_name); diff --git a/drivers/infiniband/sw/rxe/rxe_icrc.c b/drivers/infiniband/sw/rxe/rxe_icrc.c index 5424b8bea908..e116c63d7b84 100644 --- a/drivers/infiniband/sw/rxe/rxe_icrc.c +++ b/drivers/infiniband/sw/rxe/rxe_icrc.c @@ -7,8 +7,44 @@ #include "rxe.h" #include "rxe_loc.h" -/* Compute a partial ICRC for all the IB transport headers. */ -u32 rxe_icrc_hdr(struct rxe_pkt_info *pkt, struct sk_buff *skb) +/** + * rxe_crc32 - Compute incremental crc32 for a contiguous segment + * @rxe: rdma_rxe device object + * @crc: starting crc32 value from previous segments + * @addr: starting address of segment + * @len: length of the segment in bytes + * + * Returns the crc32 checksum of the segment starting from crc. 
+ */ +static u32 rxe_crc32(struct rxe_dev *rxe, u32 crc, void *addr, size_t len) +{ + u32 icrc; + int err; + + SHASH_DESC_ON_STACK(shash, rxe->tfm); + + shash->tfm = rxe->tfm; + *(u32 *)shash_desc_ctx(shash) = crc; + err = crypto_shash_update(shash, addr, len); + if (unlikely(err)) { + pr_warn_ratelimited("failed crc calculation, err: %d\n", err); + return crc32_le(crc, addr, len); + } + + icrc = *(u32 *)shash_desc_ctx(shash); + barrier_data(shash_desc_ctx(shash)); + + return icrc; +} + +/** + * rxe_icrc_hdr - Compute a partial ICRC for the IB transport headers. + * @pkt: Information about the current packet + * @skb: The packet buffer + * + * Returns the partial ICRC + */ +static u32 rxe_icrc_hdr(struct rxe_pkt_info *pkt, struct sk_buff *skb) { unsigned int bth_offset = 0; struct iphdr *ip4h = NULL; @@ -71,9 +107,9 @@ u32 rxe_icrc_hdr(struct rxe_pkt_info *pkt, struct sk_buff *skb) /** * rxe_icrc_check - Compute ICRC for a packet and compare to the ICRC * delivered in the packet. - * @skb: The packet buffer with packet info in skb->cb[] (receive path) + * @skb: packet buffer with packet info in skb->cb[] (receive path) * - * Returns 0 on success or an error on failure + * Returns 0 if the ICRCs match or an error on failure */ int rxe_icrc_check(struct sk_buff *skb) { @@ -106,7 +142,11 @@ int rxe_icrc_check(struct sk_buff *skb) return 0; } -/* rxe_icrc_generate- compute ICRC for a packet. */ +/** + * rxe_icrc_generate - Compute ICRC for a packet. 
+ * @pkt: packet information + * @skb: packet buffer + */ void rxe_icrc_generate(struct rxe_pkt_info *pkt, struct sk_buff *skb) { __be32 *icrcp; diff --git a/drivers/infiniband/sw/rxe/rxe_loc.h b/drivers/infiniband/sw/rxe/rxe_loc.h index 2c724b9970d6..b08689b664ec 100644 --- a/drivers/infiniband/sw/rxe/rxe_loc.h +++ b/drivers/infiniband/sw/rxe/rxe_loc.h @@ -193,7 +193,6 @@ int rxe_requester(void *arg); int rxe_responder(void *arg); /* rxe_icrc.c */ -u32 rxe_icrc_hdr(struct rxe_pkt_info *pkt, struct sk_buff *skb); int rxe_icrc_check(struct sk_buff *skb); void rxe_icrc_generate(struct rxe_pkt_info *pkt, struct sk_buff *skb); From patchwork Tue Jun 29 20:14:10 2021
From: Bob Pearson
To: jgg@nvidia.com, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org
Cc: Bob Pearson
Subject: [PATCH 5/5] RDMA/rxe: Move crc32 init code to rxe_icrc.c
Date: Tue, 29 Jun 2021 15:14:10 -0500
Message-Id: <20210629201412.28306-6-rpearsonhpe@gmail.com>
In-Reply-To: <20210629201412.28306-1-rpearsonhpe@gmail.com>
References: <20210629201412.28306-1-rpearsonhpe@gmail.com>

This patch collects the code from rxe_register_device() that sets up the crc32 calculation into a subroutine rxe_icrc_init() in rxe_icrc.c. This completes collecting all the code specific to computing ICRC into one file with a simple set of APIs. Minor cleanups in rxe_icrc.c to comments and byte order types.

Signed-off-by: Bob Pearson
Reported-by: kernel test robot
---
 drivers/infiniband/sw/rxe/rxe.h | 1 - drivers/infiniband/sw/rxe/rxe_icrc.c | 75 +++++++++++++++++---------- drivers/infiniband/sw/rxe/rxe_loc.h | 1 + drivers/infiniband/sw/rxe/rxe_verbs.c | 11 ++-- 4 files changed, 53 insertions(+), 35 deletions(-) diff --git a/drivers/infiniband/sw/rxe/rxe.h b/drivers/infiniband/sw/rxe/rxe.h index 65a73c1c8b35..1bb3fb618bf5 100644 --- a/drivers/infiniband/sw/rxe/rxe.h +++ b/drivers/infiniband/sw/rxe/rxe.h @@ -14,7 +14,6 @@ #include #include -#include #include #include diff --git a/drivers/infiniband/sw/rxe/rxe_icrc.c b/drivers/infiniband/sw/rxe/rxe_icrc.c index e116c63d7b84..4f311798d682 100644 --- a/drivers/infiniband/sw/rxe/rxe_icrc.c +++ b/drivers/infiniband/sw/rxe/rxe_icrc.c @@ -4,34 +4,59 @@ * Copyright (c) 2015 System Fabric Works, Inc. All rights reserved.
*/ +#include #include "rxe.h" #include "rxe_loc.h" /** - * rxe_crc32 - Compute incremental crc32 for a contiguous segment + * rxe_icrc_init - Initialize crypto function for computing crc32 + * @rxe: rdma_rxe device object + * + * Returns 0 on success else an error + */ +int rxe_icrc_init(struct rxe_dev *rxe) +{ + struct crypto_shash *tfm; + + tfm = crypto_alloc_shash("crc32", 0, 0); + if (IS_ERR(tfm)) { + pr_err("failed to init crc32 algorithm err:%ld\n", + PTR_ERR(tfm)); + return PTR_ERR(tfm); + } + + rxe->tfm = tfm; + + return 0; +} + +/** + * rxe_crc32 - Compute cumulative crc32 for a contiguous segment * @rxe: rdma_rxe device object * @crc: starting crc32 value from previous segments * @addr: starting address of segment * @len: length of the segment in bytes * - * Returns the crc32 checksum of the segment starting from crc. + * Returns the crc32 cumulative checksum including the segment starting + * from crc. */ -static u32 rxe_crc32(struct rxe_dev *rxe, u32 crc, void *addr, size_t len) +static __be32 rxe_crc32(struct rxe_dev *rxe, __be32 crc, void *addr, + size_t len) { - u32 icrc; + __be32 icrc; int err; SHASH_DESC_ON_STACK(shash, rxe->tfm); shash->tfm = rxe->tfm; - *(u32 *)shash_desc_ctx(shash) = crc; + *(__be32 *)shash_desc_ctx(shash) = crc; err = crypto_shash_update(shash, addr, len); if (unlikely(err)) { pr_warn_ratelimited("failed crc calculation, err: %d\n", err); return crc32_le(crc, addr, len); } - icrc = *(u32 *)shash_desc_ctx(shash); + icrc = *(__be32 *)shash_desc_ctx(shash); barrier_data(shash_desc_ctx(shash)); return icrc; @@ -39,19 +64,16 @@ static u32 rxe_crc32(struct rxe_dev *rxe, u32 crc, void *addr, size_t len) /** * rxe_icrc_hdr - Compute a partial ICRC for the IB transport headers. 
- * @pkt: Information about the current packet - * @skb: The packet buffer + * @pkt: packet information + * @skb: packet buffer * * Returns the partial ICRC */ static u32 rxe_icrc_hdr(struct rxe_pkt_info *pkt, struct sk_buff *skb) { - unsigned int bth_offset = 0; - struct iphdr *ip4h = NULL; - struct ipv6hdr *ip6h = NULL; struct udphdr *udph; struct rxe_bth *bth; - int crc; + __be32 crc; int length; int hdr_size = sizeof(struct udphdr) + (skb->protocol == htons(ETH_P_IP) ? @@ -69,6 +91,8 @@ static u32 rxe_icrc_hdr(struct rxe_pkt_info *pkt, struct sk_buff *skb) crc = 0xdebb20e3; if (skb->protocol == htons(ETH_P_IP)) { /* IPv4 */ + struct iphdr *ip4h = NULL; + memcpy(pshdr, ip_hdr(skb), hdr_size); ip4h = (struct iphdr *)pshdr; udph = (struct udphdr *)(ip4h + 1); @@ -77,6 +101,8 @@ static u32 rxe_icrc_hdr(struct rxe_pkt_info *pkt, struct sk_buff *skb) ip4h->check = CSUM_MANGLED_0; ip4h->tos = 0xff; } else { /* IPv6 */ + struct ipv6hdr *ip6h = NULL; + memcpy(pshdr, ipv6_hdr(skb), hdr_size); ip6h = (struct ipv6hdr *)pshdr; udph = (struct udphdr *)(ip6h + 1); @@ -85,12 +111,9 @@ static u32 rxe_icrc_hdr(struct rxe_pkt_info *pkt, struct sk_buff *skb) ip6h->priority = 0xf; ip6h->hop_limit = 0xff; } - udph->check = CSUM_MANGLED_0; - - bth_offset += hdr_size; - memcpy(&pshdr[bth_offset], pkt->hdr, RXE_BTH_BYTES); - bth = (struct rxe_bth *)&pshdr[bth_offset]; + bth = (struct rxe_bth *)(udph + 1); + memcpy(bth, pkt->hdr, RXE_BTH_BYTES); /* exclude bth.resv8a */ bth->qpn |= cpu_to_be32(~BTH_QPN_MASK); @@ -115,18 +138,18 @@ int rxe_icrc_check(struct sk_buff *skb) { struct rxe_pkt_info *pkt = SKB_TO_PKT(skb); __be32 *icrcp; - u32 pkt_icrc; - u32 icrc; + __be32 packet_icrc; + __be32 computed_icrc; icrcp = (__be32 *)(pkt->hdr + pkt->paylen - RXE_ICRC_SIZE); - pkt_icrc = be32_to_cpu(*icrcp); + packet_icrc = *icrcp; - icrc = rxe_icrc_hdr(pkt, skb); - icrc = rxe_crc32(pkt->rxe, icrc, (u8 *)payload_addr(pkt), - payload_size(pkt) + bth_pad(pkt)); - icrc = (__force u32)cpu_to_be32(~icrc); 
+ computed_icrc = rxe_icrc_hdr(pkt, skb); + computed_icrc = rxe_crc32(pkt->rxe, computed_icrc, + (u8 *)payload_addr(pkt), payload_size(pkt) + bth_pad(pkt)); + computed_icrc = ~computed_icrc; - if (unlikely(icrc != pkt_icrc)) { + if (unlikely(computed_icrc != packet_icrc)) { if (skb->protocol == htons(ETH_P_IPV6)) pr_warn_ratelimited("bad ICRC from %pI6c\n", &ipv6_hdr(skb)->saddr); @@ -150,7 +173,7 @@ int rxe_icrc_check(struct sk_buff *skb) void rxe_icrc_generate(struct rxe_pkt_info *pkt, struct sk_buff *skb) { __be32 *icrcp; - u32 icrc; + __be32 icrc; icrcp = (__be32 *)(pkt->hdr + pkt->paylen - RXE_ICRC_SIZE); icrc = rxe_icrc_hdr(pkt, skb); diff --git a/drivers/infiniband/sw/rxe/rxe_loc.h b/drivers/infiniband/sw/rxe/rxe_loc.h index b08689b664ec..f98378f8ff31 100644 --- a/drivers/infiniband/sw/rxe/rxe_loc.h +++ b/drivers/infiniband/sw/rxe/rxe_loc.h @@ -193,6 +193,7 @@ int rxe_requester(void *arg); int rxe_responder(void *arg); /* rxe_icrc.c */ +int rxe_icrc_init(struct rxe_dev *rxe); int rxe_icrc_check(struct sk_buff *skb); void rxe_icrc_generate(struct rxe_pkt_info *pkt, struct sk_buff *skb); diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.c b/drivers/infiniband/sw/rxe/rxe_verbs.c index c223959ac174..f7b1a1f64c13 100644 --- a/drivers/infiniband/sw/rxe/rxe_verbs.c +++ b/drivers/infiniband/sw/rxe/rxe_verbs.c @@ -1154,7 +1154,6 @@ int rxe_register_device(struct rxe_dev *rxe, const char *ibdev_name) { int err; struct ib_device *dev = &rxe->ib_dev; - struct crypto_shash *tfm; strscpy(dev->node_desc, "rxe", sizeof(dev->node_desc)); @@ -1173,13 +1172,9 @@ int rxe_register_device(struct rxe_dev *rxe, const char *ibdev_name) if (err) return err; - tfm = crypto_alloc_shash("crc32", 0, 0); - if (IS_ERR(tfm)) { - pr_err("failed to allocate crc algorithm err:%ld\n", - PTR_ERR(tfm)); - return PTR_ERR(tfm); - } - rxe->tfm = tfm; + err = rxe_icrc_init(rxe); + if (err) + return err; err = ib_register_device(dev, ibdev_name, NULL); if (err) From patchwork Tue Jun 29 20:14:11 2021 
From: Bob Pearson
To: jgg@nvidia.com, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org
Cc: Bob Pearson
Subject: [PATCH 6/7] RDMA/rxe: Add parameters to control checking/generating ICRC
Date: Tue, 29 Jun 2021 15:14:11 -0500
Message-Id: <20210629201412.28306-7-rpearsonhpe@gmail.com>
In-Reply-To: <20210629201412.28306-1-rpearsonhpe@gmail.com>
References: <20210629201412.28306-1-rpearsonhpe@gmail.com>

Add module parameters rxe_must_check_icrc and rxe_must_generate_icrc, which default to true and which suppress checking/generating the ICRC when set to false. The parameters are displayed in /sys as "check_icrc" and "generate_icrc".

Signed-off-by: Bob Pearson
---
 drivers/infiniband/sw/rxe/rxe.c | 9 +++++++++ drivers/infiniband/sw/rxe/rxe.h | 4 ++++ drivers/infiniband/sw/rxe/rxe_net.c | 3 ++- drivers/infiniband/sw/rxe/rxe_recv.c | 8 +++++--- 4 files changed, 20 insertions(+), 4 deletions(-) diff --git a/drivers/infiniband/sw/rxe/rxe.c b/drivers/infiniband/sw/rxe/rxe.c index 8e0f9c489cab..08de3ef9f1f2 100644 --- a/drivers/infiniband/sw/rxe/rxe.c +++ b/drivers/infiniband/sw/rxe/rxe.c @@ -15,6 +15,15 @@ MODULE_LICENSE("Dual BSD/GPL"); bool rxe_initialized; +/* If set to false these parameters disable checking and/or generating + * the packet ICRC + */ +bool rxe_must_check_icrc = true; +module_param_named(check_icrc, rxe_must_check_icrc, bool, 0660); + +bool rxe_must_generate_icrc = true; +module_param_named(generate_icrc, rxe_must_generate_icrc, bool, 0660); + /* free resources for a rxe device all objects created for this device must * have been destroyed */ diff --git a/drivers/infiniband/sw/rxe/rxe.h b/drivers/infiniband/sw/rxe/rxe.h index
1bb3fb618bf5..a5083a924a6f 100644 --- a/drivers/infiniband/sw/rxe/rxe.h +++ b/drivers/infiniband/sw/rxe/rxe.h @@ -13,6 +13,7 @@ #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt #include +#include #include #include @@ -39,6 +40,9 @@ #define RXE_ROCE_V2_SPORT (0xc000) +extern bool rxe_must_check_icrc; +extern bool rxe_must_generate_icrc; + extern bool rxe_initialized; void rxe_set_mtu(struct rxe_dev *rxe, unsigned int dev_mtu); diff --git a/drivers/infiniband/sw/rxe/rxe_net.c b/drivers/infiniband/sw/rxe/rxe_net.c index 3860281a3a90..4d109e5b33ff 100644 --- a/drivers/infiniband/sw/rxe/rxe_net.c +++ b/drivers/infiniband/sw/rxe/rxe_net.c @@ -434,7 +434,8 @@ int rxe_xmit_packet(struct rxe_qp *qp, struct rxe_pkt_info *pkt, goto drop; } - rxe_icrc_generate(skb, pkt); + if (rxe_must_generate_icrc) + rxe_icrc_generate(skb, pkt); if (pkt->mask & RXE_LOOPBACK_MASK) { memcpy(SKB_TO_PKT(skb), pkt, sizeof(*pkt)); diff --git a/drivers/infiniband/sw/rxe/rxe_recv.c b/drivers/infiniband/sw/rxe/rxe_recv.c index 8582b3163e2c..01d425b3991e 100644 --- a/drivers/infiniband/sw/rxe/rxe_recv.c +++ b/drivers/infiniband/sw/rxe/rxe_recv.c @@ -382,9 +382,11 @@ void rxe_rcv(struct sk_buff *skb) if (unlikely(err)) goto drop; - err = rxe_icrc_check(skb); - if (unlikely(err)) - goto drop; + if (rxe_must_check_icrc) { + err = rxe_icrc_check(skb); + if (unlikely(err)) + goto drop; + } rxe_counter_inc(rxe, RXE_CNT_RCVD_PKTS); From patchwork Tue Jun 29 20:14:12 2021
From: Bob Pearson
To: jgg@nvidia.com, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org
Cc: Bob Pearson
Subject: [PATCH 7/7] RDMA/rxe: Extend ICRC to support nonlinear skbs
Date: Tue, 29 Jun 2021 15:14:12 -0500
Message-Id: <20210629201412.28306-8-rpearsonhpe@gmail.com>
In-Reply-To: <20210629201412.28306-1-rpearsonhpe@gmail.com>
References: <20210629201412.28306-1-rpearsonhpe@gmail.com>

Make ICRC calculations aware of potential non-linear skbs. This is a step towards getting rid of skb_linearize() and its extra data copy.
Signed-off-by: Bob Pearson
---
 drivers/infiniband/sw/rxe/rxe_icrc.c | 150 +++++++++++++++++----------
 drivers/infiniband/sw/rxe/rxe_loc.h  |   4 +-
 drivers/infiniband/sw/rxe/rxe_net.c  |   7 +-
 drivers/infiniband/sw/rxe/rxe_recv.c |   2 +-
 4 files changed, 103 insertions(+), 60 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe_icrc.c b/drivers/infiniband/sw/rxe/rxe_icrc.c
index f5ebd9d23d12..d730c76bbeae 100644
--- a/drivers/infiniband/sw/rxe/rxe_icrc.c
+++ b/drivers/infiniband/sw/rxe/rxe_icrc.c
@@ -63,97 +63,134 @@ static __be32 rxe_crc32(struct rxe_dev *rxe, __be32 crc, void *addr,
 }
 
 /**
- * rxe_icrc_hdr - Compute a partial ICRC for the IB transport headers.
+ * rxe_icrc_packet - Compute the ICRC for a packet
  * @skb: packet buffer
  * @pkt: packet information
+ * @icrcp: pointer to returned ICRC
  *
- * Returns the partial ICRC
+ * Support linear or nonlinear skbs with frags
+ *
+ * Returns ICRC in *icrcp and 0 if no error occurs
+ * else returns an error.
+ *
  * For details see the InfiniBand Architecture spec
  * and Annex 17 the RoCE v2 spec.
  */
-static __be32 rxe_icrc_hdr(struct sk_buff *skb, struct rxe_pkt_info *pkt)
+static int rxe_icrc_packet(struct sk_buff *skb, struct rxe_pkt_info *pkt,
+			   __be32 *icrcp)
 {
+	struct skb_shared_info *info = skb_shinfo(skb);
+	struct rxe_dev *rxe = pkt->rxe;
+	struct iphdr *ip4h;
+	struct ipv6hdr *ip6h;
 	struct udphdr *udph;
 	struct rxe_bth *bth;
-	__be32 crc;
-	int length;
-	int hdr_size = sizeof(struct udphdr) +
+	__be32 icrc;
+	int hdr_size;
+	u8 pseudo_hdr[128];
+	int resid;
+	int bytes;
+	int nfrag;
+	skb_frag_t *frag;
+	u8 *addr;
+	int page_offset;
+	int start;
+	int len;
+	int ret;
+
+	hdr_size = rxe_opcode[pkt->opcode].length + sizeof(struct udphdr) +
 		(skb->protocol == htons(ETH_P_IP) ?
-		sizeof(struct iphdr) : sizeof(struct ipv6hdr));
-	/* pseudo header buffer size is calculate using ipv6 header size since
-	 * it is bigger than ipv4
-	 */
-	u8 pshdr[sizeof(struct udphdr) +
-		 sizeof(struct ipv6hdr) +
-		 RXE_BTH_BYTES];
-
-	/* This seed is the result of computing a CRC with a seed of
-	 * 0xfffffff and 8 bytes of 0xff representing a masked LRH.
-	 */
-	crc = 0xdebb20e3;
+		sizeof(struct iphdr) : sizeof(struct ipv6hdr));
 
-	if (skb->protocol == htons(ETH_P_IP)) { /* IPv4 */
-		struct iphdr *ip4h;
+	start = skb->network_header + skb->head - skb->data;
+	ret = skb_copy_bits(skb, start, pseudo_hdr, hdr_size);
+	if (unlikely(ret)) {
+		pr_warn_ratelimited("Malformed skb\n");
+		return ret;
+	}
 
-		memcpy(pshdr, ip_hdr(skb), hdr_size);
-		ip4h = (struct iphdr *)pshdr;
+	if (skb->protocol == htons(ETH_P_IP)) { /* IPv4 */
+		ip4h = (struct iphdr *)pseudo_hdr;
 		udph = (struct udphdr *)(ip4h + 1);
+		bth = (struct rxe_bth *)(udph + 1);
 
 		ip4h->ttl = 0xff;
 		ip4h->check = CSUM_MANGLED_0;
 		ip4h->tos = 0xff;
 	} else { /* IPv6 */
-		struct ipv6hdr *ip6h;
-
-		memcpy(pshdr, ipv6_hdr(skb), hdr_size);
-		ip6h = (struct ipv6hdr *)pshdr;
+		ip6h = (struct ipv6hdr *)pseudo_hdr;
 		udph = (struct udphdr *)(ip6h + 1);
+		bth = (struct rxe_bth *)(udph + 1);
 
-		memset(ip6h->flow_lbl, 0xff, sizeof(ip6h->flow_lbl));
 		ip6h->priority = 0xf;
 		ip6h->hop_limit = 0xff;
 	}
 
 	udph->check = CSUM_MANGLED_0;
-
-	bth = (struct rxe_bth *)(udph + 1);
-	memcpy(bth, pkt->hdr, RXE_BTH_BYTES);
-
-	/* exclude bth.resv8a */
 	bth->qpn |= cpu_to_be32(~BTH_QPN_MASK);
 
-	length = hdr_size + RXE_BTH_BYTES;
-	crc = rxe_crc32(pkt->rxe, crc, pshdr, length);
+	icrc = 0xdebb20e3;
+	icrc = rxe_crc32(pkt->rxe, icrc, pseudo_hdr, hdr_size);
+
+	resid = (payload_size(pkt) + 0x3) & ~0x3;
+	nfrag = -1;
+
+	while (resid) {
+		if (nfrag < 0) {
+			addr = skb_network_header(skb) + hdr_size;
+			len = skb_tail_pointer(skb) - skb_network_header(skb);
+		} else if (nfrag < info->nr_frags) {
+			frag = &info->frags[nfrag];
+			page_offset = frag->bv_offset + hdr_size;
+			addr = kmap_atomic(frag->bv_page) + page_offset;
+			len = frag->bv_len;
+		} else {
+			pr_warn_ratelimited("Malformed skb\n");
+			return -EINVAL;
+		}
+
+		bytes = len - hdr_size;
+		if (bytes > 0) {
+			if (bytes > resid)
+				bytes = resid;
+			icrc = rxe_crc32(rxe, icrc, addr, bytes);
+			resid -= bytes;
+			hdr_size = 0;
+		} else {
+			hdr_size -= len;
+		}
+
+		if (nfrag++ >= 0)
+			kunmap_atomic(addr);
+	}
+
+	*icrcp = ~icrc;
 
-	/* And finish to compute the CRC on the remainder of the headers. */
-	crc = rxe_crc32(pkt->rxe, crc, pkt->hdr + RXE_BTH_BYTES,
-			rxe_opcode[pkt->opcode].length - RXE_BTH_BYTES);
-	return crc;
+	return 0;
 }
 
 /**
  * rxe_check_icrc - Compute ICRC for a packet and compare to the ICRC
- *		    delivered in the packet.
- * @skb: packet buffer with packet info in cb[] (receive path)
+ *		    in the packet.
+ * @skb: packet buffer
+ * @pkt: packet information
  *
  * Returns 0 if the ICRCs match or an error on failure
  */
-int rxe_icrc_check(struct sk_buff *skb)
+int rxe_icrc_check(struct sk_buff *skb, struct rxe_pkt_info *pkt)
 {
-	struct rxe_pkt_info *pkt = SKB_TO_PKT(skb);
 	__be32 *icrcp;
 	__be32 packet_icrc;
-	__be32 computed_icrc;
+	__be32 icrc;
+	int ret;
 
 	icrcp = (__be32 *)(pkt->hdr + pkt->paylen - RXE_ICRC_SIZE);
 	packet_icrc = *icrcp;
 
-	computed_icrc = rxe_icrc_hdr(skb, pkt);
-	computed_icrc = rxe_crc32(pkt->rxe, computed_icrc,
-		(u8 *)payload_addr(pkt), payload_size(pkt) + bth_pad(pkt));
-	computed_icrc = ~computed_icrc;
+	ret = rxe_icrc_packet(skb, pkt, &icrc);
+	if (unlikely(ret))
+		return ret;
 
-	if (unlikely(computed_icrc != packet_icrc)) {
+	if (unlikely(icrc != packet_icrc)) {
 		if (skb->protocol == htons(ETH_P_IPV6))
 			pr_warn_ratelimited("bad ICRC from %pI6c\n",
 					    &ipv6_hdr(skb)->saddr);
@@ -162,7 +199,6 @@ int rxe_icrc_check(struct sk_buff *skb)
 			&ip_hdr(skb)->saddr);
 		else
 			pr_warn_ratelimited("bad ICRC from unknown\n");
-
 		return -EINVAL;
 	}
 
@@ -174,15 +210,19 @@ int rxe_icrc_check(struct sk_buff *skb)
  * correct position after the payload and pad.
  * @skb: packet buffer
  * @pkt: packet information
+ *
+ * Returns 0 on success or an error
  */
-void rxe_icrc_generate(struct sk_buff *skb, struct rxe_pkt_info *pkt)
+int rxe_icrc_generate(struct sk_buff *skb, struct rxe_pkt_info *pkt)
 {
 	__be32 *icrcp;
-	__be32 icrc;
+	int ret;
 
 	icrcp = (__be32 *)(pkt->hdr + pkt->paylen - RXE_ICRC_SIZE);
-	icrc = rxe_icrc_hdr(skb, pkt);
-	icrc = rxe_crc32(pkt->rxe, icrc, (u8 *)payload_addr(pkt),
-			 payload_size(pkt) + bth_pad(pkt));
-	*icrcp = ~icrc;
+
+	ret = rxe_icrc_packet(skb, pkt, icrcp);
+	if (unlikely(ret))
+		return ret;
+
+	return 0;
 }
diff --git a/drivers/infiniband/sw/rxe/rxe_loc.h b/drivers/infiniband/sw/rxe/rxe_loc.h
index e8e87336469b..09836cdb1e89 100644
--- a/drivers/infiniband/sw/rxe/rxe_loc.h
+++ b/drivers/infiniband/sw/rxe/rxe_loc.h
@@ -194,8 +194,8 @@ int rxe_responder(void *arg);
 
 /* rxe_icrc.c */
 int rxe_icrc_init(struct rxe_dev *rxe);
-int rxe_icrc_check(struct sk_buff *skb);
-void rxe_icrc_generate(struct sk_buff *skb, struct rxe_pkt_info *pkt);
+int rxe_icrc_check(struct sk_buff *skb, struct rxe_pkt_info *pkt);
+int rxe_icrc_generate(struct sk_buff *skb, struct rxe_pkt_info *pkt);
 
 void rxe_resp_queue_pkt(struct rxe_qp *qp, struct sk_buff *skb);
diff --git a/drivers/infiniband/sw/rxe/rxe_net.c b/drivers/infiniband/sw/rxe/rxe_net.c
index 4d109e5b33ff..d708ff19e774 100644
--- a/drivers/infiniband/sw/rxe/rxe_net.c
+++ b/drivers/infiniband/sw/rxe/rxe_net.c
@@ -434,8 +434,11 @@ int rxe_xmit_packet(struct rxe_qp *qp, struct rxe_pkt_info *pkt,
 		goto drop;
 	}
 
-	if (rxe_must_generate_icrc)
-		rxe_icrc_generate(skb, pkt);
+	if (rxe_must_generate_icrc) {
+		err = rxe_icrc_generate(skb, pkt);
+		if (unlikely(err))
+			goto drop;
+	}
 
 	if (pkt->mask & RXE_LOOPBACK_MASK) {
 		memcpy(SKB_TO_PKT(skb), pkt, sizeof(*pkt));
diff --git a/drivers/infiniband/sw/rxe/rxe_recv.c b/drivers/infiniband/sw/rxe/rxe_recv.c
index 01d425b3991e..7f51b9e92437 100644
--- a/drivers/infiniband/sw/rxe/rxe_recv.c
+++ b/drivers/infiniband/sw/rxe/rxe_recv.c
@@ -383,7 +383,7 @@ void rxe_rcv(struct sk_buff *skb)
 		goto drop;
 
 	if (rxe_must_check_icrc) {
-		err = rxe_icrc_check(skb);
+		err = rxe_icrc_check(skb, pkt);
 		if (unlikely(err))
 			goto drop;
 	}