From patchwork Thu Jul 27 19:28:25 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Bob Pearson X-Patchwork-Id: 13330610 X-Patchwork-Delegate: jgg@ziepe.ca Received: from rpearson-X570-AORUS-PRO-WIFI.tx.rr.com (2603-8081-140c-1a00-a360-d7ee-0b00-a1d3.res6.spectrum.com.
[2603:8081:140c:1a00:a360:d7ee:b00:a1d3]) by smtp.gmail.com with ESMTPSA id f185-20020a4a58c2000000b005658aed310bsm955354oob.15.2023.07.27.12.29.21 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Thu, 27 Jul 2023 12:29:22 -0700 (PDT) From: Bob Pearson To: jgg@nvidia.com, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org, jhack@hpe.com Cc: Bob Pearson Subject: [PATCH for-next v3 1/8] RDMA/rxe: Add pad size to struct rxe_pkt_info Date: Thu, 27 Jul 2023 14:28:25 -0500 Message-Id: <20230727192831.65495-2-rpearsonhpe@gmail.com> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230727192831.65495-1-rpearsonhpe@gmail.com> References: <20230727192831.65495-1-rpearsonhpe@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-rdma@vger.kernel.org Add the packet pad size to struct rxe_pkt_info and use this to simplify references to pad size in the rxe driver. Signed-off-by: Bob Pearson --- drivers/infiniband/sw/rxe/rxe_hdr.h | 1 + drivers/infiniband/sw/rxe/rxe_icrc.c | 4 ++-- drivers/infiniband/sw/rxe/rxe_recv.c | 1 + drivers/infiniband/sw/rxe/rxe_req.c | 20 ++++++++++---------- drivers/infiniband/sw/rxe/rxe_resp.c | 24 +++++++++++------------- 5 files changed, 25 insertions(+), 25 deletions(-) diff --git a/drivers/infiniband/sw/rxe/rxe_hdr.h b/drivers/infiniband/sw/rxe/rxe_hdr.h index 46f82b27fcd2..1dcdb87fa01a 100644 --- a/drivers/infiniband/sw/rxe/rxe_hdr.h +++ b/drivers/infiniband/sw/rxe/rxe_hdr.h @@ -22,6 +22,7 @@ struct rxe_pkt_info { u16 paylen; /* length of bth - icrc */ u8 port_num; /* port pkt received on */ u8 opcode; /* bth opcode of packet */ + u8 pad; /* pad size of packet */ }; /* Macros should be used only for received skb */ diff --git a/drivers/infiniband/sw/rxe/rxe_icrc.c b/drivers/infiniband/sw/rxe/rxe_icrc.c index fdf5f08cd8f1..c9aa0995e900 100644 --- a/drivers/infiniband/sw/rxe/rxe_icrc.c +++ b/drivers/infiniband/sw/rxe/rxe_icrc.c @@ -148,7 +148,7 @@ int rxe_icrc_check(struct sk_buff *skb, struct rxe_pkt_info *pkt) icrc = rxe_icrc_hdr(skb, pkt); icrc = rxe_crc32(pkt->rxe, icrc, (u8 *)payload_addr(pkt), - payload_size(pkt) + bth_pad(pkt)); + payload_size(pkt) + pkt->pad); icrc = ~icrc; if (unlikely(icrc != pkt_icrc)) @@ -170,6 +170,6 @@ void rxe_icrc_generate(struct sk_buff *skb, struct rxe_pkt_info *pkt) icrcp = (__be32 *)(pkt->hdr + pkt->paylen - RXE_ICRC_SIZE); icrc = rxe_icrc_hdr(skb, pkt); icrc = rxe_crc32(pkt->rxe, icrc, (u8 *)payload_addr(pkt), - payload_size(pkt) + bth_pad(pkt)); + payload_size(pkt) + pkt->pad); *icrcp = ~icrc; } diff --git a/drivers/infiniband/sw/rxe/rxe_recv.c b/drivers/infiniband/sw/rxe/rxe_recv.c index 5861e4244049..f912a913f89a 100644 --- a/drivers/infiniband/sw/rxe/rxe_recv.c +++ b/drivers/infiniband/sw/rxe/rxe_recv.c @@ -329,6 +329,7 @@ void rxe_rcv(struct sk_buff *skb) pkt->psn = bth_psn(pkt); pkt->qp = NULL; pkt->mask |= rxe_opcode[pkt->opcode].mask; + pkt->pad = bth_pad(pkt); if (unlikely(skb->len < header_size(pkt))) goto drop; diff --git a/drivers/infiniband/sw/rxe/rxe_req.c b/drivers/infiniband/sw/rxe/rxe_req.c index d8c41fd626a9..31858761ca1e 100644 --- a/drivers/infiniband/sw/rxe/rxe_req.c +++ b/drivers/infiniband/sw/rxe/rxe_req.c @@ -420,18 +420,17 @@ static struct sk_buff *init_req_packet(struct rxe_qp *qp, struct rxe_dev *rxe = to_rdev(qp->ibqp.device); struct sk_buff *skb; struct rxe_send_wr *ibwr = &wqe->wr; - int pad = (-payload) & 0x3; - int paylen; int solicited; u32 qp_num; int ack_req; /* length from start of bth to end of icrc */ - paylen = rxe_opcode[opcode].length + payload + pad + RXE_ICRC_SIZE; 
- pkt->paylen = paylen; + pkt->pad = (-payload) & 0x3; + pkt->paylen = rxe_opcode[opcode].length + payload + + pkt->pad + RXE_ICRC_SIZE; /* init skb */ - skb = rxe_init_packet(rxe, av, paylen, pkt); + skb = rxe_init_packet(rxe, av, pkt->paylen, pkt); if (unlikely(!skb)) return NULL; @@ -450,7 +449,8 @@ static struct sk_buff *init_req_packet(struct rxe_qp *qp, if (ack_req) qp->req.noack_pkts = 0; - bth_init(pkt, pkt->opcode, solicited, 0, pad, IB_DEFAULT_PKEY_FULL, qp_num, + bth_init(pkt, pkt->opcode, solicited, 0, pkt->pad, + IB_DEFAULT_PKEY_FULL, qp_num, ack_req, pkt->psn); /* init optional headers */ @@ -499,6 +499,7 @@ static int finish_packet(struct rxe_qp *qp, struct rxe_av *av, struct rxe_send_wqe *wqe, struct rxe_pkt_info *pkt, struct sk_buff *skb, u32 payload) { + u8 *pad_addr; int err; err = rxe_prepare(av, pkt, skb); @@ -520,10 +521,9 @@ static int finish_packet(struct rxe_qp *qp, struct rxe_av *av, if (err) return err; } - if (bth_pad(pkt)) { - u8 *pad = payload_addr(pkt) + payload; - - memset(pad, 0, bth_pad(pkt)); + if (pkt->pad) { + pad_addr = payload_addr(pkt) + payload; + memset(pad_addr, 0, pkt->pad); } } else if (pkt->mask & RXE_FLUSH_MASK) { /* oA19-2: shall have no payload. */ diff --git a/drivers/infiniband/sw/rxe/rxe_resp.c b/drivers/infiniband/sw/rxe/rxe_resp.c index 64c64f5f36a8..fc2f55329fa2 100644 --- a/drivers/infiniband/sw/rxe/rxe_resp.c +++ b/drivers/infiniband/sw/rxe/rxe_resp.c @@ -525,7 +525,7 @@ static enum resp_states check_rkey(struct rxe_qp *qp, skip_check_range: if (pkt->mask & (RXE_WRITE_MASK | RXE_ATOMIC_WRITE_MASK)) { if (resid > mtu) { - if (pktlen != mtu || bth_pad(pkt)) { + if (pktlen != mtu || pkt->pad) { state = RESPST_ERR_LENGTH; goto err; } @@ -534,7 +534,7 @@ static enum resp_states check_rkey(struct rxe_qp *qp, state = RESPST_ERR_LENGTH; goto err; } - if ((bth_pad(pkt) != (0x3 & (-resid)))) { + if ((pkt->pad != (0x3 & (-resid)))) { /* This case may not be exactly that * but nothing else fits. 
*/ @@ -766,27 +766,25 @@ static struct sk_buff *prepare_ack_packet(struct rxe_qp *qp, { struct rxe_dev *rxe = to_rdev(qp->ibqp.device); struct sk_buff *skb; - int paylen; - int pad; int err; /* * allocate packet */ - pad = (-payload) & 0x3; - paylen = rxe_opcode[opcode].length + payload + pad + RXE_ICRC_SIZE; + ack->pad = (-payload) & 0x3; + ack->paylen = rxe_opcode[opcode].length + payload + + ack->pad + RXE_ICRC_SIZE; - skb = rxe_init_packet(rxe, &qp->pri_av, paylen, ack); + skb = rxe_init_packet(rxe, &qp->pri_av, ack->paylen, ack); if (!skb) return NULL; ack->qp = qp; ack->opcode = opcode; ack->mask = rxe_opcode[opcode].mask; - ack->paylen = paylen; ack->psn = psn; - bth_init(ack, opcode, 0, 0, pad, IB_DEFAULT_PKEY_FULL, + bth_init(ack, opcode, 0, 0, ack->pad, IB_DEFAULT_PKEY_FULL, qp->attr.dest_qp_num, 0, psn); if (ack->mask & RXE_AETH_MASK) { @@ -874,6 +872,7 @@ static enum resp_states read_reply(struct rxe_qp *qp, int err; struct resp_res *res = qp->resp.res; struct rxe_mr *mr; + u8 *pad_addr; if (!res) { res = rxe_prepare_res(qp, req_pkt, RXE_READ_MASK); @@ -932,10 +931,9 @@ static enum resp_states read_reply(struct rxe_qp *qp, goto err_out; } - if (bth_pad(&ack_pkt)) { - u8 *pad = payload_addr(&ack_pkt) + payload; - - memset(pad, 0, bth_pad(&ack_pkt)); + if (ack_pkt.pad) { + pad_addr = payload_addr(&ack_pkt) + payload; + memset(pad_addr, 0, ack_pkt.pad); } /* rxe_xmit_packet always consumes the skb */ From patchwork Thu Jul 27 19:28:26 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Bob Pearson X-Patchwork-Id: 13330611 X-Patchwork-Delegate: jgg@ziepe.ca Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 92966C04A94 for ; Thu, 27 Jul 2023 19:29:35 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232197AbjG0T3d (ORCPT ); Thu, 27 Jul 2023 15:29:33 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:53868 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232131AbjG0T3Z (ORCPT ); Thu, 27 Jul 2023 15:29:25 -0400 Received: from mail-oo1-xc2c.google.com (mail-oo1-xc2c.google.com [IPv6:2607:f8b0:4864:20::c2c]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 49E1C2D68 for ; Thu, 27 Jul 2023 12:29:24 -0700 (PDT) Received: by mail-oo1-xc2c.google.com with SMTP id 006d021491bc7-5607cdb0959so772597eaf.2 for ; Thu, 27 Jul 2023 12:29:24 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20221208; t=1690486163; x=1691090963; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=uNMwUm9bjNnK7SYOx3BfAILkRaWNStS5XUcYTZh2C8o=; b=J/630uKxCerImsloN7okbClP5sRUrdICiz2UpRXt3HBSLgPmKRDzGhrfokKn5lslE3 +zX+0x5PBRYQlB2rcpsZfQSjVBVIc4vxQ3vm3T9jdcL5Oc2OAZirVuG7tCqnYV5L6pLI /0Z1vuRajCG4pYmWMpP7QDLTV2RV15TFfwW86YaU/QbgqGqdBDRT+HJ2txB+VWuAwE7Y SQrZhBxjgvDwmGg1BVKLYt3UaXATH8htWCfmqvVHxTDJ6ryif6fRgGUT9ER5rKbIxZ47 h+aEC5Xfo29yeoUKgWjuLPaG27B9mhhm8vTlkGp0WQBSuQ6oGe1dUdqVuorkowzkjd1b 3c0A== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1690486163; x=1691090963; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc 
:subject:date:message-id:reply-to; bh=uNMwUm9bjNnK7SYOx3BfAILkRaWNStS5XUcYTZh2C8o=; b=LXEaRKseZyLLOBIBCtQqofaaNSMC5S8pkF9X1+6vY5Kbl9/Wd47pS8+QfYpRFs7VlN xxBd4qMjvhH8qRXrjJgAP0ldJ6IQWEIJzgqAPJzx8iUqSMArMTU3fXr3X/VvGgKJm0kt 6BZOfa95aPE93u1L8P4qy6LbfbAu1u0qfoKXRdpVOMexdTpHXaBBK9kOzfgMSMpgUPIl q2ABJpGagG4UiAO7e1wTi+YL2E60JIAJdPBOaX1/fK6JgZsmEHQTTKRLHht95b2EtOUV 6MDrpY0Od2As07aN06QDD8GYpafLWQuFp0HKn6eo4hUhew1vopC6jcaINAFfAGUCnSjC BzXA== X-Gm-Message-State: ABy/qLbjoYM7gLagT81qnGd63WZ+S7G72eugz/Sy43iP0bs4y8BXDIFa vnxfv2nC0EKkcFqQfTBZi6A= X-Google-Smtp-Source: APBJJlHWLDoQLU7rrYyYKwXMdCkVA3FxaeWF/ISpKjrQtqescRQekC8+0eR0VJ8+Xig8xZK64kffog== X-Received: by 2002:a4a:7607:0:b0:566:fdb9:4378 with SMTP id t7-20020a4a7607000000b00566fdb94378mr426340ooc.1.1690486163408; Thu, 27 Jul 2023 12:29:23 -0700 (PDT) Received: from rpearson-X570-AORUS-PRO-WIFI.tx.rr.com (2603-8081-140c-1a00-a360-d7ee-0b00-a1d3.res6.spectrum.com. [2603:8081:140c:1a00:a360:d7ee:b00:a1d3]) by smtp.gmail.com with ESMTPSA id f185-20020a4a58c2000000b005658aed310bsm955354oob.15.2023.07.27.12.29.22 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Thu, 27 Jul 2023 12:29:23 -0700 (PDT) From: Bob Pearson To: jgg@nvidia.com, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org, jhack@hpe.com Cc: Bob Pearson Subject: [PATCH for-next v3 2/8] RDMA/rxe: Isolate code to fill request roce headers Date: Thu, 27 Jul 2023 14:28:26 -0500 Message-Id: <20230727192831.65495-3-rpearsonhpe@gmail.com> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230727192831.65495-1-rpearsonhpe@gmail.com> References: <20230727192831.65495-1-rpearsonhpe@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-rdma@vger.kernel.org Isolate the code to fill in roce headers in a request packet into a subroutine named rxe_init_roce_hdrs. Signed-off-by: Bob Pearson --- drivers/infiniband/sw/rxe/rxe_req.c | 108 +++++++++++++++------------- 1 file changed, 57 insertions(+), 51 deletions(-) diff --git a/drivers/infiniband/sw/rxe/rxe_req.c b/drivers/infiniband/sw/rxe/rxe_req.c index 31858761ca1e..6e9c8da001a4 100644 --- a/drivers/infiniband/sw/rxe/rxe_req.c +++ b/drivers/infiniband/sw/rxe/rxe_req.c @@ -411,86 +411,92 @@ static inline int get_mtu(struct rxe_qp *qp) return rxe->port.mtu_cap; } -static struct sk_buff *init_req_packet(struct rxe_qp *qp, - struct rxe_av *av, - struct rxe_send_wqe *wqe, - int opcode, u32 payload, - struct rxe_pkt_info *pkt) +static void rxe_init_roce_hdrs(struct rxe_qp *qp, struct rxe_send_wqe *wqe, + struct rxe_pkt_info *pkt) { - struct rxe_dev *rxe = to_rdev(qp->ibqp.device); - struct sk_buff *skb; - struct rxe_send_wr *ibwr = &wqe->wr; - int solicited; - u32 qp_num; - int ack_req; - - /* length from start of bth to end of icrc */ - pkt->pad = (-payload) & 0x3; - pkt->paylen = rxe_opcode[opcode].length + payload + - pkt->pad + RXE_ICRC_SIZE; - - /* init skb */ - skb = rxe_init_packet(rxe, av, pkt->paylen, pkt); - if (unlikely(!skb)) - return NULL; + struct rxe_send_wr *wr = &wqe->wr; + int is_send; + int is_write_imm; + int is_end; + int solicited; + u32 dst_qpn; + u32 qkey; + int ack_req; /* init bth */ - solicited = (ibwr->send_flags & IB_SEND_SOLICITED) && - (pkt->mask & RXE_END_MASK) && - ((pkt->mask & (RXE_SEND_MASK)) || - (pkt->mask & (RXE_WRITE_MASK | RXE_IMMDT_MASK)) == - (RXE_WRITE_MASK | RXE_IMMDT_MASK)); - - qp_num = (pkt->mask & RXE_DETH_MASK) ? 
ibwr->wr.ud.remote_qpn : - qp->attr.dest_qp_num; - - ack_req = ((pkt->mask & RXE_END_MASK) || - (qp->req.noack_pkts++ > RXE_MAX_PKT_PER_ACK)); + is_send = pkt->mask & RXE_SEND_MASK; + is_write_imm = (pkt->mask & RXE_WRITE_MASK) && + (pkt->mask & RXE_IMMDT_MASK); + is_end = pkt->mask & RXE_END_MASK; + solicited = (wr->send_flags & IB_SEND_SOLICITED) && is_end && + (is_send || is_write_imm); + dst_qpn = (pkt->mask & RXE_DETH_MASK) ? wr->wr.ud.remote_qpn : + qp->attr.dest_qp_num; + ack_req = is_end || (qp->req.noack_pkts++ > RXE_MAX_PKT_PER_ACK); if (ack_req) qp->req.noack_pkts = 0; bth_init(pkt, pkt->opcode, solicited, 0, pkt->pad, - IB_DEFAULT_PKEY_FULL, qp_num, - ack_req, pkt->psn); + IB_DEFAULT_PKEY_FULL, dst_qpn, ack_req, pkt->psn); - /* init optional headers */ + /* init extended headers */ if (pkt->mask & RXE_RETH_MASK) { if (pkt->mask & RXE_FETH_MASK) - reth_set_rkey(pkt, ibwr->wr.flush.rkey); + reth_set_rkey(pkt, wr->wr.flush.rkey); else - reth_set_rkey(pkt, ibwr->wr.rdma.rkey); + reth_set_rkey(pkt, wr->wr.rdma.rkey); reth_set_va(pkt, wqe->iova); reth_set_len(pkt, wqe->dma.resid); } - /* Fill Flush Extension Transport Header */ if (pkt->mask & RXE_FETH_MASK) - feth_init(pkt, ibwr->wr.flush.type, ibwr->wr.flush.level); + feth_init(pkt, wr->wr.flush.type, wr->wr.flush.level); if (pkt->mask & RXE_IMMDT_MASK) - immdt_set_imm(pkt, ibwr->ex.imm_data); + immdt_set_imm(pkt, wr->ex.imm_data); if (pkt->mask & RXE_IETH_MASK) - ieth_set_rkey(pkt, ibwr->ex.invalidate_rkey); + ieth_set_rkey(pkt, wr->ex.invalidate_rkey); if (pkt->mask & RXE_ATMETH_MASK) { atmeth_set_va(pkt, wqe->iova); - if (opcode == IB_OPCODE_RC_COMPARE_SWAP) { - atmeth_set_swap_add(pkt, ibwr->wr.atomic.swap); - atmeth_set_comp(pkt, ibwr->wr.atomic.compare_add); + if (pkt->opcode == IB_OPCODE_RC_COMPARE_SWAP) { + atmeth_set_swap_add(pkt, wr->wr.atomic.swap); + atmeth_set_comp(pkt, wr->wr.atomic.compare_add); } else { - atmeth_set_swap_add(pkt, ibwr->wr.atomic.compare_add); + atmeth_set_swap_add(pkt, wr->wr.atomic.compare_add); } - atmeth_set_rkey(pkt, ibwr->wr.atomic.rkey); + atmeth_set_rkey(pkt, wr->wr.atomic.rkey); } if (pkt->mask & RXE_DETH_MASK) { - if (qp->ibqp.qp_num == 1) - deth_set_qkey(pkt, GSI_QKEY); - else - deth_set_qkey(pkt, ibwr->wr.ud.remote_qkey); - deth_set_sqp(pkt, qp->ibqp.qp_num); + qkey = (qp->ibqp.qp_num == 1) ? 
GSI_QKEY : + wr->wr.ud.remote_qkey; + deth_set_qkey(pkt, qkey); + deth_set_sqp(pkt, qp_num(qp)); } +} + +static struct sk_buff *init_req_packet(struct rxe_qp *qp, + struct rxe_av *av, + struct rxe_send_wqe *wqe, + int opcode, u32 payload, + struct rxe_pkt_info *pkt) +{ + struct rxe_dev *rxe = to_rdev(qp->ibqp.device); + struct sk_buff *skb; + + /* length from start of bth to end of icrc */ + pkt->pad = (-payload) & 0x3; + pkt->paylen = rxe_opcode[opcode].length + payload + + pkt->pad + RXE_ICRC_SIZE; + + /* init skb */ + skb = rxe_init_packet(rxe, av, pkt->paylen, pkt); + if (unlikely(!skb)) + return NULL; + + rxe_init_roce_hdrs(qp, wqe, pkt); return skb; } From patchwork Thu Jul 27 19:28:27 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Bob Pearson X-Patchwork-Id: 13330612 X-Patchwork-Delegate: jgg@ziepe.ca Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id A08D6C41513 for ; Thu, 27 Jul 2023 19:29:36 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232220AbjG0T3e (ORCPT ); Thu, 27 Jul 2023 15:29:34 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:53876 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231483AbjG0T30 (ORCPT ); Thu, 27 Jul 2023 15:29:26 -0400 Received: from mail-oo1-xc2b.google.com (mail-oo1-xc2b.google.com [IPv6:2607:f8b0:4864:20::c2b]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 7A46B2D75 for ; Thu, 27 Jul 2023 12:29:25 -0700 (PDT) Received: by mail-oo1-xc2b.google.com with SMTP id 006d021491bc7-5661eb57452so914838eaf.2 for ; Thu, 27 Jul 2023 12:29:25 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20221208; t=1690486165; x=1691090965; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=AW2eisZB8LFflZPdRQNrvlYPitHp5FxyhEVepQEMqXQ=; b=WFNJbAcYSC2d8swdv6RSfUFnhE7eXAdb+dicXRPsyxCUvkR2cXFK2N5YU8nqfU4JcA 0jyQnnePww3V/EGF9s6zWSyHh5EE8cb0su/FUNRGvlt0FuAgIMD5JSGKlDa5BqLDXn0Z OdkRTDXd9mxO5d7cEOqdeiu8DnwPZMAKuS95W9PPV/bRtZCi/BzA/blKuvTDZUHjoQ9M BpLLzcFKXNqj8JRS59bVbIm0ZyevFRq/pXFRIull+pdESc7zK0OBiS0w8w18vVPqFhL3 CBfqenEgvlFK5Y6bM1QdltqDsFEvVxhmIUm7y3B2TvLofGMEkbtHFmXqZQX9g/ixCAPx wAcg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1690486165; x=1691090965; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=AW2eisZB8LFflZPdRQNrvlYPitHp5FxyhEVepQEMqXQ=; b=V73V5065q+5CEnYXxqJERJCR1w9pJeh+QlB7bQ6wJU3XspNANYMen4EXupxcI5c9r4 n1FSK9hDJpk1k/NIXnyJFx5l17vLukTeTAYo61bC/xPAZem83QbtqgCBLG6yI5lffgFX basYeP6NEWMFsgalEgDT7FEEtd5jWvbsLGhgrIudAV3FaHW7Mu5xwcdIzmVKGIv3cgWd QYqGjAi2JIFd89wapMXMbEP52fTE1EglMFc7fNvVTPffJVIvNpmuDw5D4fXScT8f5Oop 9riEMG/KA3Qc6YIBKxmtCnrXxL+31MSN5JPfKLh1QV8L1iFOm3/LveGt0ECaoVc0QnlJ DCqg== X-Gm-Message-State: ABy/qLbAwKwylwHvlpfHsPG55O+pqaA5bGmOeNzjTHJ0aF1ErvQ0xzVW tvmB4hgHLA8tT/pBcXAPUdQ= X-Google-Smtp-Source: APBJJlEPyZHdkVefhYzdss+dga/Nt4gNLWh96AEj3P/YXojElMDSN31r1YWeZ+4nhEaFufDP2jN29Q== X-Received: by 2002:a4a:3003:0:b0:566:f2b9:eb86 with SMTP id q3-20020a4a3003000000b00566f2b9eb86mr409326oof.4.1690486164432; Thu, 27 Jul 2023 12:29:24 
-0700 (PDT) Received: from rpearson-X570-AORUS-PRO-WIFI.tx.rr.com (2603-8081-140c-1a00-a360-d7ee-0b00-a1d3.res6.spectrum.com. [2603:8081:140c:1a00:a360:d7ee:b00:a1d3]) by smtp.gmail.com with ESMTPSA id f185-20020a4a58c2000000b005658aed310bsm955354oob.15.2023.07.27.12.29.23 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Thu, 27 Jul 2023 12:29:23 -0700 (PDT) From: Bob Pearson To: jgg@nvidia.com, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org, jhack@hpe.com Cc: Bob Pearson Subject: [PATCH for-next v3 3/8] RDMA/rxe: Isolate request payload code in a subroutine Date: Thu, 27 Jul 2023 14:28:27 -0500 Message-Id: <20230727192831.65495-4-rpearsonhpe@gmail.com> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230727192831.65495-1-rpearsonhpe@gmail.com> References: <20230727192831.65495-1-rpearsonhpe@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-rdma@vger.kernel.org Isolate the code that fills the payload of a request packet into a subroutine named rxe_init_payload(). Signed-off-by: Bob Pearson --- drivers/infiniband/sw/rxe/rxe_req.c | 34 +++++++++++++++++------------ 1 file changed, 20 insertions(+), 14 deletions(-) diff --git a/drivers/infiniband/sw/rxe/rxe_req.c b/drivers/infiniband/sw/rxe/rxe_req.c index 6e9c8da001a4..c92e561b8a0b 100644 --- a/drivers/infiniband/sw/rxe/rxe_req.c +++ b/drivers/infiniband/sw/rxe/rxe_req.c @@ -477,6 +477,25 @@ static void rxe_init_roce_hdrs(struct rxe_qp *qp, struct rxe_send_wqe *wqe, } } +static int rxe_init_payload(struct rxe_qp *qp, struct rxe_send_wqe *wqe, + struct rxe_pkt_info *pkt, u32 payload) +{ + void *data; + int err = 0; + + if (wqe->wr.send_flags & IB_SEND_INLINE) { + data = &wqe->dma.inline_data[wqe->dma.sge_offset]; + memcpy(payload_addr(pkt), data, payload); + wqe->dma.resid -= payload; + wqe->dma.sge_offset += payload; + } else { + err = copy_data(qp->pd, 0, &wqe->dma, payload_addr(pkt), + payload, RXE_FROM_MR_OBJ); + } + + return err; +} + static struct sk_buff *init_req_packet(struct rxe_qp *qp, struct rxe_av *av, struct rxe_send_wqe *wqe, @@ -513,20 +532,7 @@ static int finish_packet(struct rxe_qp *qp, struct rxe_av *av, return err; if (pkt->mask & RXE_WRITE_OR_SEND_MASK) { - if (wqe->wr.send_flags & IB_SEND_INLINE) { - u8 *tmp = &wqe->dma.inline_data[wqe->dma.sge_offset]; - - memcpy(payload_addr(pkt), tmp, payload); - - wqe->dma.resid -= payload; - wqe->dma.sge_offset += payload; - } else { - err = copy_data(qp->pd, 0, &wqe->dma, - payload_addr(pkt), payload, - RXE_FROM_MR_OBJ); - if (err) - return err; - } + err = rxe_init_payload(qp, wqe, pkt, payload); if (pkt->pad) { pad_addr = payload_addr(pkt) + payload; memset(pad_addr, 0, pkt->pad); From patchwork Thu Jul 27 19:28:28 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Bob Pearson X-Patchwork-Id: 13330614 X-Patchwork-Delegate: jgg@ziepe.ca Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 9907DC00528 for ; Thu, 27 Jul 2023 19:29:37 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231483AbjG0T3f (ORCPT ); Thu, 27 Jul 2023 15:29:35 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:53886 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232168AbjG0T31 (ORCPT ); Thu, 27 Jul 2023 15:29:27 -0400 Received: from 
mail-oo1-xc2a.google.com (mail-oo1-xc2a.google.com [IPv6:2607:f8b0:4864:20::c2a]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 46CDC2D76 for ; Thu, 27 Jul 2023 12:29:26 -0700 (PDT) Received: by mail-oo1-xc2a.google.com with SMTP id 006d021491bc7-56597d949b1so995881eaf.1 for ; Thu, 27 Jul 2023 12:29:26 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20221208; t=1690486165; x=1691090965; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=NNAoAAHPcHhZFmhIslpmZ58QW7JAmbcneyNRSnwR644=; b=l4nYSebYHFMmkiEE8AlItH8EIBuYA1nzZIAzz92lt29SuO4HtNblR+VWOopcIN3f5i CmdzqSxQwkrVZ8qAg9vwsijD8IKnYtFqFXUxn51PiQlX92ECCHemr2JqPUxSJOEEQ00m 8ZviOymc5cMrjI8B3vta5iycGwJ3cKcCMJGBg/9zdkniQ19ysgWblSQa4O/inFP3y/0m WA8unshFMG1CZDOcXTarrznxE7YIzmyj4C7tX2MYdOH0X6IJjAHPw7sln+toe+e8eqEy chSJGsdcn3M8xXB5CfbfwWk9gmcxxC2haobNmqowGsPBhKcHDJbeSEhON97R2TphueRz jXHQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1690486165; x=1691090965; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=NNAoAAHPcHhZFmhIslpmZ58QW7JAmbcneyNRSnwR644=; b=HS0+at35MkrLGkt2yd0eP0X+lpsKUWKHGD8++RF7jbg4EE7nbZNO1/N3ZoXyXN4rhz eflpEAY36BC7iyoKpo68wCO6L8L14kpyBdqgiEFF6qoPX8pDrpBj07SDAXozjRLBs39/ r+bb6dAXxGMBstHxgreQW1K7+ec2IubHju+fRRFfjoZowQ4EenY+V/uulVMlqqWLUk9t UJWnIENk1LR31kxyIMlwS4ZH46dXsNdJEebjZAa4y4MBWHIZjvPyXoDldamttlV04KFg WRdoIQ+gwuDk+/J0warw3yw6VrjkXS+r/x2rbzQJce8GRFAd0dT80B7YpEmYcpUv5nNR vMhQ== X-Gm-Message-State: ABy/qLaL/dASrTFU00jRywEWNUjpzf/Yo+hO5zQolSyyxC1Qb/pjR5LX 4aplfJbF8dWfjkqV0uPeUAo= X-Google-Smtp-Source: APBJJlHtLDqztWLUEHRr7++g3fOQvL81phv5KzlDPU0SiA3re8eTtMCR5rYYvs+ViaHbFIG4IZNPaA== X-Received: by 2002:a4a:9c0d:0:b0:566:f6a0:87e4 with SMTP id y13-20020a4a9c0d000000b00566f6a087e4mr477677ooj.0.1690486165498; Thu, 27 Jul 2023 12:29:25 -0700 (PDT) Received: from rpearson-X570-AORUS-PRO-WIFI.tx.rr.com (2603-8081-140c-1a00-a360-d7ee-0b00-a1d3.res6.spectrum.com. [2603:8081:140c:1a00:a360:d7ee:b00:a1d3]) by smtp.gmail.com with ESMTPSA id f185-20020a4a58c2000000b005658aed310bsm955354oob.15.2023.07.27.12.29.24 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Thu, 27 Jul 2023 12:29:24 -0700 (PDT) From: Bob Pearson To: jgg@nvidia.com, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org, jhack@hpe.com Cc: Bob Pearson Subject: [PATCH for-next v3 4/8] RDMA/rxe: Remove paylen parameter from rxe_init_packet Date: Thu, 27 Jul 2023 14:28:28 -0500 Message-Id: <20230727192831.65495-5-rpearsonhpe@gmail.com> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230727192831.65495-1-rpearsonhpe@gmail.com> References: <20230727192831.65495-1-rpearsonhpe@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-rdma@vger.kernel.org Remove paylen as a parameter to rxe_init_packet() since it is already available in pkt. 
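The interface change is just the dropped argument; as a condensed illustration taken from the diff that follows (not a complete listing), the prototype goes from

	struct sk_buff *rxe_init_packet(struct rxe_dev *rxe, struct rxe_av *av,
					int paylen, struct rxe_pkt_info *pkt);

to

	/* the length is now read from pkt->paylen inside the function */
	struct sk_buff *rxe_init_packet(struct rxe_dev *rxe, struct rxe_av *av,
					struct rxe_pkt_info *pkt);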
Signed-off-by: Bob Pearson --- drivers/infiniband/sw/rxe/rxe_loc.h | 2 +- drivers/infiniband/sw/rxe/rxe_net.c | 7 ++++--- drivers/infiniband/sw/rxe/rxe_req.c | 2 +- drivers/infiniband/sw/rxe/rxe_resp.c | 2 +- 4 files changed, 7 insertions(+), 6 deletions(-) diff --git a/drivers/infiniband/sw/rxe/rxe_loc.h b/drivers/infiniband/sw/rxe/rxe_loc.h index 666e06a82bc9..cf38f4dcff78 100644 --- a/drivers/infiniband/sw/rxe/rxe_loc.h +++ b/drivers/infiniband/sw/rxe/rxe_loc.h @@ -90,7 +90,7 @@ void rxe_mw_cleanup(struct rxe_pool_elem *elem); /* rxe_net.c */ struct sk_buff *rxe_init_packet(struct rxe_dev *rxe, struct rxe_av *av, - int paylen, struct rxe_pkt_info *pkt); + struct rxe_pkt_info *pkt); int rxe_prepare(struct rxe_av *av, struct rxe_pkt_info *pkt, struct sk_buff *skb); int rxe_xmit_packet(struct rxe_qp *qp, struct rxe_pkt_info *pkt, diff --git a/drivers/infiniband/sw/rxe/rxe_net.c b/drivers/infiniband/sw/rxe/rxe_net.c index 0e447420a441..006c2d60f04d 100644 --- a/drivers/infiniband/sw/rxe/rxe_net.c +++ b/drivers/infiniband/sw/rxe/rxe_net.c @@ -511,7 +511,7 @@ int rxe_xmit_packet(struct rxe_qp *qp, struct rxe_pkt_info *pkt, } struct sk_buff *rxe_init_packet(struct rxe_dev *rxe, struct rxe_av *av, - int paylen, struct rxe_pkt_info *pkt) + struct rxe_pkt_info *pkt) { unsigned int hdr_len; struct sk_buff *skb = NULL; @@ -525,7 +525,8 @@ struct sk_buff *rxe_init_packet(struct rxe_dev *rxe, struct rxe_av *av, hdr_len = ETH_HLEN + sizeof(struct udphdr) + sizeof(struct ipv6hdr); - skb = alloc_skb(paylen + hdr_len + LL_RESERVED_SPACE(ndev), GFP_ATOMIC); + skb = alloc_skb(pkt->paylen + hdr_len + LL_RESERVED_SPACE(ndev), + GFP_ATOMIC); if (unlikely(!skb)) goto out; @@ -541,7 +542,7 @@ struct sk_buff *rxe_init_packet(struct rxe_dev *rxe, struct rxe_av *av, pkt->rxe = rxe; pkt->port_num = port_num; - pkt->hdr = skb_put(skb, paylen); + pkt->hdr = skb_put(skb, pkt->paylen); pkt->mask |= RXE_GRH_MASK; out: diff --git a/drivers/infiniband/sw/rxe/rxe_req.c b/drivers/infiniband/sw/rxe/rxe_req.c index c92e561b8a0b..e444e1f91523 100644 --- a/drivers/infiniband/sw/rxe/rxe_req.c +++ b/drivers/infiniband/sw/rxe/rxe_req.c @@ -511,7 +511,7 @@ static struct sk_buff *init_req_packet(struct rxe_qp *qp, pkt->pad + RXE_ICRC_SIZE; /* init skb */ - skb = rxe_init_packet(rxe, av, pkt->paylen, pkt); + skb = rxe_init_packet(rxe, av, pkt); if (unlikely(!skb)) return NULL; diff --git a/drivers/infiniband/sw/rxe/rxe_resp.c b/drivers/infiniband/sw/rxe/rxe_resp.c index fc2f55329fa2..7e79d3e4d64e 100644 --- a/drivers/infiniband/sw/rxe/rxe_resp.c +++ b/drivers/infiniband/sw/rxe/rxe_resp.c @@ -775,7 +775,7 @@ static struct sk_buff *prepare_ack_packet(struct rxe_qp *qp, ack->paylen = rxe_opcode[opcode].length + payload + ack->pad + RXE_ICRC_SIZE; - skb = rxe_init_packet(rxe, &qp->pri_av, ack->paylen, ack); + skb = rxe_init_packet(rxe, &qp->pri_av, ack); if (!skb) return NULL; From patchwork Thu Jul 27 19:28:29 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Bob Pearson X-Patchwork-Id: 13330615 X-Patchwork-Delegate: jgg@ziepe.ca Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 7F374C04A94 for ; Thu, 27 Jul 2023 19:29:39 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232173AbjG0T3h (ORCPT ); Thu, 27 Jul 2023 15:29:37 -0400 Received: from 
lindbergh.monkeyblade.net ([23.128.96.19]:54054 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230428AbjG0T3e (ORCPT ); Thu, 27 Jul 2023 15:29:34 -0400 Received: from mail-oo1-xc2a.google.com (mail-oo1-xc2a.google.com [IPv6:2607:f8b0:4864:20::c2a]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 3FEBC30D7 for ; Thu, 27 Jul 2023 12:29:28 -0700 (PDT) Received: by mail-oo1-xc2a.google.com with SMTP id 006d021491bc7-5636425bf98so783827eaf.1 for ; Thu, 27 Jul 2023 12:29:28 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20221208; t=1690486167; x=1691090967; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=QDt+23W+QUHsyKd1ADomZhzoJoGVn9eEM3AnTbteYHY=; b=iTLLi4HCigDWLpNwh5sExmKOQBqUba0w6Z3Xk4TghQO3tPHCobTzRAvmDM86e6+qjI BttxLNGWhsvZhNlJ0ZxyqQ1iuVOcdMLfG4QF8nfTxerRXLl/WMRAJ5MpGL42GzYxs6xJ a4GBpPvJKFZsW4A89/awFIejwfWu8yUMsiUKHah1aXanZm7mSTjufFWkVjE2eHUSCjBL GesxxvrZqaz0BTX2rLX6YkcYt2Fzxbz63zTprAgkUGPG6CZeDjL/b9YwxgbqeH1X+Xo4 sWePNExBTb41oD7fR5cixPQrSssMqdJ5LynOFC8KUk0b+6RS2kMSGctUFE+i59sVuyt3 f8qg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1690486167; x=1691090967; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=QDt+23W+QUHsyKd1ADomZhzoJoGVn9eEM3AnTbteYHY=; b=cccgmezQ6VLEA+6As0SAhvT/jFYUrhn+JRhev14qXfNWRE8bNdhBxicDMLrxnDv+gM gE0A8dZ6v9Amd0ldwnk0v8qfrVae3Im4HSX0gnGdDro1f/Ib1kxI02sCIk2jHYELO8Az m2ytAe604w65TbSCv5bwW79NlkY/nyVdiVJnDKNgoeGfh049XGOi9XDizdcLZEo93D8w 9ydZP7FoFB3UXioivVyhyWFNtrsfHRYYxsjDp9HLij1vBJoE+qppFQDaYvIXtMfyIW3P nB2KJ8xZbg+n/EAeOD519E2ltTMYoFvx4SekXeSrvbHiXgDBtVxMd0sJVl8vstmluwxk TnYA== X-Gm-Message-State: ABy/qLbpmsqV7oS88kIW7O6yHWTQTv7QlJIc5Fv3SlOeDMp0pbUTeH2P Z2Dcvww4YGVyD14MjW+HypijdwMLzT4= X-Google-Smtp-Source: APBJJlHrVqINHn3TMTGNOQMoICmQUhIuC68GJtZ8DHnp96bU8iqFzq3IPLLvDpOskeTfBQmc8PwztQ== X-Received: by 2002:a4a:6558:0:b0:566:fb4c:981f with SMTP id z24-20020a4a6558000000b00566fb4c981fmr476523oog.0.1690486166489; Thu, 27 Jul 2023 12:29:26 -0700 (PDT) Received: from rpearson-X570-AORUS-PRO-WIFI.tx.rr.com (2603-8081-140c-1a00-a360-d7ee-0b00-a1d3.res6.spectrum.com. [2603:8081:140c:1a00:a360:d7ee:b00:a1d3]) by smtp.gmail.com with ESMTPSA id f185-20020a4a58c2000000b005658aed310bsm955354oob.15.2023.07.27.12.29.25 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Thu, 27 Jul 2023 12:29:25 -0700 (PDT) From: Bob Pearson To: jgg@nvidia.com, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org, jhack@hpe.com Cc: Bob Pearson Subject: [PATCH for-next v3 5/8] RDMA/rxe: Isolate code to build request packet Date: Thu, 27 Jul 2023 14:28:29 -0500 Message-Id: <20230727192831.65495-6-rpearsonhpe@gmail.com> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230727192831.65495-1-rpearsonhpe@gmail.com> References: <20230727192831.65495-1-rpearsonhpe@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-rdma@vger.kernel.org Isolate the code to build a request packet into a single subroutine called rxe_init_req_packet(). 
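For orientation, the order of operations the new subroutine follows, condensed from the diff that follows into a comment-style outline (a summary sketch, not a literal listing):

	/*
	 * rxe_init_req_packet() build sequence:
	 *   1. fill the skb-independent pkt fields (rxe, opcode, qp, psn,
	 *      mask, wqe, port_num)
	 *   2. rxe_get_av()         - get the address vector (and AH for UD)
	 *   3. compute pkt->pad and pkt->paylen
	 *   4. rxe_init_packet()    - allocate the skb
	 *   5. rxe_init_roce_hdrs() - fill the BTH and extended headers
	 *   6. rxe_init_payload()   - copy inline or DMA payload, if any
	 *   7. zero the pad bytes and call rxe_prepare() for IP/UDP headers
	 *   8. on failure take the single err_out path: set wqe->status,
	 *      free the skb and drop the AH reference
	 */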
Signed-off-by: Bob Pearson --- drivers/infiniband/sw/rxe/rxe_req.c | 127 +++++++++++++--------------- 1 file changed, 60 insertions(+), 67 deletions(-) diff --git a/drivers/infiniband/sw/rxe/rxe_req.c b/drivers/infiniband/sw/rxe/rxe_req.c index e444e1f91523..27be1a946d62 100644 --- a/drivers/infiniband/sw/rxe/rxe_req.c +++ b/drivers/infiniband/sw/rxe/rxe_req.c @@ -496,14 +496,32 @@ static int rxe_init_payload(struct rxe_qp *qp, struct rxe_send_wqe *wqe, return err; } -static struct sk_buff *init_req_packet(struct rxe_qp *qp, - struct rxe_av *av, - struct rxe_send_wqe *wqe, - int opcode, u32 payload, - struct rxe_pkt_info *pkt) +static struct sk_buff *rxe_init_req_packet(struct rxe_qp *qp, + struct rxe_send_wqe *wqe, + int opcode, u32 payload, + struct rxe_pkt_info *pkt) { struct rxe_dev *rxe = to_rdev(qp->ibqp.device); - struct sk_buff *skb; + struct sk_buff *skb = NULL; + struct rxe_av *av; + struct rxe_ah *ah = NULL; + u8 *pad_addr; + int err; + + pkt->rxe = rxe; + pkt->opcode = opcode; + pkt->qp = qp; + pkt->psn = qp->req.psn; + pkt->mask = rxe_opcode[opcode].mask; + pkt->wqe = wqe; + pkt->port_num = 1; + + /* get address vector and address handle for UD qps only */ + av = rxe_get_av(pkt, &ah); + if (unlikely(!av)) { + err = -EINVAL; + goto err_out; + } /* length from start of bth to end of icrc */ pkt->pad = (-payload) & 0x3; @@ -512,31 +530,19 @@ static struct sk_buff *init_req_packet(struct rxe_qp *qp, /* init skb */ skb = rxe_init_packet(rxe, av, pkt); - if (unlikely(!skb)) - return NULL; + if (unlikely(!skb)) { + err = -ENOMEM; + goto err_out; + } + /* init roce headers */ rxe_init_roce_hdrs(qp, wqe, pkt); - return skb; -} - -static int finish_packet(struct rxe_qp *qp, struct rxe_av *av, - struct rxe_send_wqe *wqe, struct rxe_pkt_info *pkt, - struct sk_buff *skb, u32 payload) -{ - u8 *pad_addr; - int err; - - err = rxe_prepare(av, pkt, skb); - if (err) - return err; - + /* init payload if any */ if (pkt->mask & RXE_WRITE_OR_SEND_MASK) { err = rxe_init_payload(qp, wqe, pkt, payload); - if (pkt->pad) { - pad_addr = payload_addr(pkt) + payload; - memset(pad_addr, 0, pkt->pad); - } + if (unlikely(err)) + goto err_out; } else if (pkt->mask & RXE_FLUSH_MASK) { /* oA19-2: shall have no payload. 
*/ wqe->dma.resid = 0; @@ -547,7 +553,32 @@ static int finish_packet(struct rxe_qp *qp, struct rxe_av *av, wqe->dma.resid -= payload; } - return 0; + /* init pad and icrc */ + if (pkt->pad) { + pad_addr = payload_addr(pkt) + payload; + memset(pad_addr, 0, pkt->pad); + } + + /* init IP and UDP network headers */ + err = rxe_prepare(av, pkt, skb); + if (unlikely(err)) + goto err_out; + + if (ah) + rxe_put(ah); + + return skb; + +err_out: + if (err == -EFAULT) + wqe->status = IB_WC_LOC_PROT_ERR; + else + wqe->status = IB_WC_LOC_QP_OP_ERR; + if (skb) + kfree_skb(skb); + if (ah) + rxe_put(ah); + return NULL; } static void update_wqe_state(struct rxe_qp *qp, @@ -678,7 +709,6 @@ static int rxe_do_local_ops(struct rxe_qp *qp, struct rxe_send_wqe *wqe) int rxe_requester(struct rxe_qp *qp) { - struct rxe_dev *rxe = to_rdev(qp->ibqp.device); struct rxe_pkt_info pkt; struct sk_buff *skb; struct rxe_send_wqe *wqe; @@ -691,8 +721,6 @@ int rxe_requester(struct rxe_qp *qp) struct rxe_send_wqe rollback_wqe; u32 rollback_psn; struct rxe_queue *q = qp->sq.queue; - struct rxe_ah *ah; - struct rxe_av *av; unsigned long flags; spin_lock_irqsave(&qp->state_lock, flags); @@ -804,47 +832,12 @@ int rxe_requester(struct rxe_qp *qp) payload = mtu; } - pkt.rxe = rxe; - pkt.opcode = opcode; - pkt.qp = qp; - pkt.psn = qp->req.psn; - pkt.mask = rxe_opcode[opcode].mask; - pkt.wqe = wqe; - /* save wqe state before we build and send packet */ save_state(wqe, qp, &rollback_wqe, &rollback_psn); - av = rxe_get_av(&pkt, &ah); - if (unlikely(!av)) { - rxe_dbg_qp(qp, "Failed no address vector\n"); - wqe->status = IB_WC_LOC_QP_OP_ERR; - goto err; - } - - skb = init_req_packet(qp, av, wqe, opcode, payload, &pkt); - if (unlikely(!skb)) { - rxe_dbg_qp(qp, "Failed allocating skb\n"); - wqe->status = IB_WC_LOC_QP_OP_ERR; - if (ah) - rxe_put(ah); - goto err; - } - - err = finish_packet(qp, av, wqe, &pkt, skb, payload); - if (unlikely(err)) { - rxe_dbg_qp(qp, "Error during finish packet\n"); - if (err == -EFAULT) - wqe->status = IB_WC_LOC_PROT_ERR; - else - wqe->status = IB_WC_LOC_QP_OP_ERR; - kfree_skb(skb); - if (ah) - rxe_put(ah); + skb = rxe_init_req_packet(qp, wqe, opcode, payload, &pkt); + if (!skb) goto err; - } - - if (ah) - rxe_put(ah); /* update wqe state as though we had sent it */ update_wqe_state(qp, wqe, &pkt); From patchwork Thu Jul 27 19:28:30 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Bob Pearson X-Patchwork-Id: 13330613 X-Patchwork-Delegate: jgg@ziepe.ca Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 8159AC001DC for ; Thu, 27 Jul 2023 19:29:38 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232131AbjG0T3g (ORCPT ); Thu, 27 Jul 2023 15:29:36 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:54072 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232208AbjG0T3d (ORCPT ); Thu, 27 Jul 2023 15:29:33 -0400 Received: from mail-oo1-xc2b.google.com (mail-oo1-xc2b.google.com [IPv6:2607:f8b0:4864:20::c2b]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 6E3DF30DE for ; Thu, 27 Jul 2023 12:29:28 -0700 (PDT) Received: by mail-oo1-xc2b.google.com with SMTP id 006d021491bc7-5636425bf98so783829eaf.1 for ; Thu, 27 Jul 2023 12:29:28 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; 
From: Bob Pearson To: jgg@nvidia.com, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org, jhack@hpe.com Cc: Bob Pearson Subject: [PATCH for-next v3 6/8] RDMA/rxe: Put fake udp send code in a subroutine Date: Thu, 27 Jul 2023 14:28:30 -0500 Message-Id: <20230727192831.65495-7-rpearsonhpe@gmail.com> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230727192831.65495-1-rpearsonhpe@gmail.com> References: <20230727192831.65495-1-rpearsonhpe@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-rdma@vger.kernel.org Isolate the code that handles the case of an overlong UD send into a subroutine named fake_udp_send(). Signed-off-by: Bob Pearson --- drivers/infiniband/sw/rxe/rxe_req.c | 37 ++++++++++++++++------------- 1 file changed, 20 insertions(+), 17 deletions(-) diff --git a/drivers/infiniband/sw/rxe/rxe_req.c b/drivers/infiniband/sw/rxe/rxe_req.c index 27be1a946d62..8423d259f26a 100644 --- a/drivers/infiniband/sw/rxe/rxe_req.c +++ b/drivers/infiniband/sw/rxe/rxe_req.c @@ -707,6 +707,24 @@ static int rxe_do_local_ops(struct rxe_qp *qp, struct rxe_send_wqe *wqe) return 0; } +/* C10-93.1.1: If the total sum of all the buffer lengths specified for a + * UD message exceeds the MTU of the port as returned by QueryHCA, the CI + * shall not emit any packets for this message. Further, the CI shall not + * generate an error due to this condition.
+ */ +static void fake_udp_send(struct rxe_qp *qp, struct rxe_send_wqe *wqe) +{ + wqe->first_psn = qp->req.psn; + wqe->last_psn = qp->req.psn; + qp->req.psn = (qp->req.psn + 1) & BTH_PSN_MASK; + qp->req.opcode = IB_OPCODE_UD_SEND_ONLY; + qp->req.wqe_index = queue_next_index(qp->sq.queue, + qp->req.wqe_index); + wqe->state = wqe_state_done; + wqe->status = IB_WC_SUCCESS; + rxe_run_task(&qp->comp.task); +} + int rxe_requester(struct rxe_qp *qp) { struct rxe_pkt_info pkt; @@ -810,23 +828,8 @@ int rxe_requester(struct rxe_qp *qp) payload = (mask & (RXE_WRITE_OR_SEND_MASK | RXE_ATOMIC_WRITE_MASK)) ? wqe->dma.resid : 0; if (payload > mtu) { - if (qp_type(qp) == IB_QPT_UD) { - /* C10-93.1.1: If the total sum of all the buffer lengths specified for a - * UD message exceeds the MTU of the port as returned by QueryHCA, the CI - * shall not emit any packets for this message. Further, the CI shall not - * generate an error due to this condition. - */ - - /* fake a successful UD send */ - wqe->first_psn = qp->req.psn; - wqe->last_psn = qp->req.psn; - qp->req.psn = (qp->req.psn + 1) & BTH_PSN_MASK; - qp->req.opcode = IB_OPCODE_UD_SEND_ONLY; - qp->req.wqe_index = queue_next_index(qp->sq.queue, - qp->req.wqe_index); - wqe->state = wqe_state_done; - wqe->status = IB_WC_SUCCESS; - rxe_sched_task(&qp->comp.task); + if (unlikely(qp_type(qp) == IB_QPT_UD)) { + fake_udp_send(qp, wqe); goto done; } payload = mtu; From patchwork Thu Jul 27 19:28:31 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Bob Pearson X-Patchwork-Id: 13330616 X-Patchwork-Delegate: jgg@ziepe.ca Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id EEE85C001DC for ; Thu, 27 Jul 2023 19:29:40 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230428AbjG0T3i (ORCPT ); Thu, 27 Jul 2023 15:29:38 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:54062 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232238AbjG0T3f (ORCPT ); Thu, 27 Jul 2023 15:29:35 -0400 Received: from mail-oo1-xc2d.google.com (mail-oo1-xc2d.google.com [IPv6:2607:f8b0:4864:20::c2d]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id A339D30EB for ; Thu, 27 Jul 2023 12:29:29 -0700 (PDT) Received: by mail-oo1-xc2d.google.com with SMTP id 006d021491bc7-55e1ae72dceso945572eaf.3 for ; Thu, 27 Jul 2023 12:29:29 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20221208; t=1690486168; x=1691090968; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=+Nc50FVncZ2IVPM6QTujb5QZN24UXhvEl5eqKrdS9c4=; b=ABfBgav1XwUr5fXXota8QbJqoTxtzuiXwbhK/Yeedw0mp6SqP9G3qY7kD9S/gTxO3j LpBo2kozXQzeBOZb+15eDxGD5bMXacNk+eqX8/Ab1+n+NypwRgf4K9e9Fbaa2/F6YN4u 3p4LMcT5a55xFby8JNsyALpMbnFSo1Pi0sUPLwydj2OV53V2x+Ow2bwCUbm0MuooOQj/ d5IrbiI/Nm5dd/x0OMTJz78h56m/ZFxHBHoPmQcheJJ3u8KLYovBtBTuUcjtmgdXG3c0 KYs3/EXORqC52SJE9m2yhFEOsXvHTc68rWqOFLFqleaYdH15KyFsJXa88qpbQ3Oc05Jd /DJQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1690486168; x=1691090968; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; 
bh=+Nc50FVncZ2IVPM6QTujb5QZN24UXhvEl5eqKrdS9c4=; b=YmDdgdIMifOsFkxrmikXCkPijjopg6XhB1p+GWRswOT4psfiMvoq3NXbjdXd89m0Qy Ef0cliM4gOf3zZsuEvcqNla/5tMi2M5xye8xrIVpup78kOs2GAyCLbHoo6KM38CP2aJJ xCLLUHZ2xCJ6FSS7oytDzIY6XavRnppZnOLYOnqL0fAIjMmI/JGz8aKruj6NI9e0WUOR LNq9/LWC+A2poYWxkgfZozJEv6XVBe0icYBDZMtBGjm+scv0rJdBy1WkHZ39kGLy9fYD KsPc6WjTz32weDZTsGCPJcVHn7f+d6TKWI72oWzSl/W6h3wwxwCNwRilNWRR8OiHYhJ/ 7R0w== X-Gm-Message-State: ABy/qLb57ZZht2oRHKLWNAYVh8Q2IBZ9QggAfoelWi0gVpY6pzQvBgjX TV1GsSsE8NIiLjPCElXxM5GGAaQ7s5s= X-Google-Smtp-Source: APBJJlEAKySnwwtanHi4jovHqwFT/ill+IiFtXOqNADMZpN+BisMz06gotu26t6drkNF5wm2f6cdYw== X-Received: by 2002:a4a:7656:0:b0:567:27f4:8c45 with SMTP id w22-20020a4a7656000000b0056727f48c45mr348876ooe.8.1690486168361; Thu, 27 Jul 2023 12:29:28 -0700 (PDT) Received: from rpearson-X570-AORUS-PRO-WIFI.tx.rr.com (2603-8081-140c-1a00-a360-d7ee-0b00-a1d3.res6.spectrum.com. [2603:8081:140c:1a00:a360:d7ee:b00:a1d3]) by smtp.gmail.com with ESMTPSA id f185-20020a4a58c2000000b005658aed310bsm955354oob.15.2023.07.27.12.29.27 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Thu, 27 Jul 2023 12:29:27 -0700 (PDT) From: Bob Pearson To: jgg@nvidia.com, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org, jhack@hpe.com Cc: Bob Pearson Subject: [PATCH for-next v3 7/8] RDMA/rxe: Combine setting pkt info Date: Thu, 27 Jul 2023 14:28:31 -0500 Message-Id: <20230727192831.65495-8-rpearsonhpe@gmail.com> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230727192831.65495-1-rpearsonhpe@gmail.com> References: <20230727192831.65495-1-rpearsonhpe@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-rdma@vger.kernel.org Move setting some rxe_pkt_info fields in rxe_init_packet() together with the rest of the fields in rxe_init_req_packet() and prepare_ack_packet(). 
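The net effect, shown here as a condensed fragment of the prepare_ack_packet() hunk below (a sketch, not the complete listing), is that the caller fills every skb-independent field before allocation and attaches the header pointer once the skb exists:

	ack->rxe = rxe;
	ack->qp = qp;
	ack->opcode = opcode;
	ack->mask = rxe_opcode[opcode].mask | RXE_GRH_MASK;
	ack->psn = psn;
	ack->port_num = 1;
	...
	skb = rxe_init_packet(rxe, &qp->pri_av, ack);
	...
	ack->hdr = skb_put(skb, ack->paylen);

Note that RXE_GRH_MASK, previously ORed into pkt->mask inside rxe_init_packet(), is now set by the callers.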
Signed-off-by: Bob Pearson --- drivers/infiniband/sw/rxe/rxe_net.c | 6 ------ drivers/infiniband/sw/rxe/rxe_req.c | 4 +++- drivers/infiniband/sw/rxe/rxe_resp.c | 12 ++++++++---- 3 files changed, 11 insertions(+), 11 deletions(-) diff --git a/drivers/infiniband/sw/rxe/rxe_net.c b/drivers/infiniband/sw/rxe/rxe_net.c index 006c2d60f04d..94e347a7f386 100644 --- a/drivers/infiniband/sw/rxe/rxe_net.c +++ b/drivers/infiniband/sw/rxe/rxe_net.c @@ -516,7 +516,6 @@ struct sk_buff *rxe_init_packet(struct rxe_dev *rxe, struct rxe_av *av, unsigned int hdr_len; struct sk_buff *skb = NULL; struct net_device *ndev = rxe->ndev; - const int port_num = 1; if (av->network_type == RXE_NETWORK_TYPE_IPV4) hdr_len = ETH_HLEN + sizeof(struct udphdr) + @@ -540,11 +539,6 @@ struct sk_buff *rxe_init_packet(struct rxe_dev *rxe, struct rxe_av *av, else skb->protocol = htons(ETH_P_IPV6); - pkt->rxe = rxe; - pkt->port_num = port_num; - pkt->hdr = skb_put(skb, pkt->paylen); - pkt->mask |= RXE_GRH_MASK; - out: return skb; } diff --git a/drivers/infiniband/sw/rxe/rxe_req.c b/drivers/infiniband/sw/rxe/rxe_req.c index 8423d259f26a..4db1bacdfdb8 100644 --- a/drivers/infiniband/sw/rxe/rxe_req.c +++ b/drivers/infiniband/sw/rxe/rxe_req.c @@ -512,7 +512,7 @@ static struct sk_buff *rxe_init_req_packet(struct rxe_qp *qp, pkt->opcode = opcode; pkt->qp = qp; pkt->psn = qp->req.psn; - pkt->mask = rxe_opcode[opcode].mask; + pkt->mask = rxe_opcode[opcode].mask | RXE_GRH_MASK; pkt->wqe = wqe; pkt->port_num = 1; @@ -535,6 +535,8 @@ static struct sk_buff *rxe_init_req_packet(struct rxe_qp *qp, goto err_out; } + pkt->hdr = skb_put(skb, pkt->paylen); + /* init roce headers */ rxe_init_roce_hdrs(qp, wqe, pkt); diff --git a/drivers/infiniband/sw/rxe/rxe_resp.c b/drivers/infiniband/sw/rxe/rxe_resp.c index 7e79d3e4d64e..8a25c56dfd86 100644 --- a/drivers/infiniband/sw/rxe/rxe_resp.c +++ b/drivers/infiniband/sw/rxe/rxe_resp.c @@ -768,6 +768,13 @@ static struct sk_buff *prepare_ack_packet(struct rxe_qp *qp, struct sk_buff *skb; int err; + ack->rxe = rxe; + ack->qp = qp; + ack->opcode = opcode; + ack->mask = rxe_opcode[opcode].mask | RXE_GRH_MASK; + ack->psn = psn; + ack->port_num = 1; + /* * allocate packet */ @@ -779,10 +786,7 @@ static struct sk_buff *prepare_ack_packet(struct rxe_qp *qp, if (!skb) return NULL; - ack->qp = qp; - ack->opcode = opcode; - ack->mask = rxe_opcode[opcode].mask; - ack->psn = psn; + ack->hdr = skb_put(skb, ack->paylen); bth_init(ack, opcode, 0, 0, ack->pad, IB_DEFAULT_PKEY_FULL, qp->attr.dest_qp_num, 0, psn); From patchwork Thu Jul 27 19:28:32 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Bob Pearson X-Patchwork-Id: 13330617 X-Patchwork-Delegate: jgg@ziepe.ca Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 3B10BC00528 for ; Thu, 27 Jul 2023 19:29:41 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232184AbjG0T3j (ORCPT ); Thu, 27 Jul 2023 15:29:39 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:54122 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232249AbjG0T3f (ORCPT ); Thu, 27 Jul 2023 15:29:35 -0400 Received: from mail-oo1-xc2f.google.com (mail-oo1-xc2f.google.com [IPv6:2607:f8b0:4864:20::c2f]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id AEAD23585 for ; Thu, 27 
Jul 2023 12:29:30 -0700 (PDT) Received: by mail-oo1-xc2f.google.com with SMTP id 006d021491bc7-55e1ae72dceso945581eaf.3 for ; Thu, 27 Jul 2023 12:29:30 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20221208; t=1690486169; x=1691090969; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=MaJEk5vt2+/wifPhSnLjuk98YQwix/8lRPzZb8SRjVY=; b=Q1A0A9K9xG1/+A9wggaii/E9IA5os8hi0QIfYtSF01BQJGV38uVPT79sg1piGLN7jk btxbiel1+TD40nEXHoaPP0yAiiSCg6eIiCEwti8Ni1QZDnoWP4iEpcur2xjt0QWJwqnm gPFk0MeSkjoS6y3yu17nI6yuuk5LUNSrfWpAhybyWRuV1RKCNqBF3cCxkEQEkaiUG8FW /WTbEGVxmen+JmoaCb3WQeKTH0DvKtGZy9N4HL3oA9N/OAh+wCxfvETov8zfLZHRr7aW S8iwOjcEyMXG4MAXVlDyZ5kq/WCosz3g3Mnr9nGmUhgj/27D4Y0k7TA+uN0NuvA0Hzoi PptA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1690486169; x=1691090969; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=MaJEk5vt2+/wifPhSnLjuk98YQwix/8lRPzZb8SRjVY=; b=RlNRiFK7XUTvTIy3SQgCt80oqMyKrEJnyn7UnDOmk7qvefWFNCToUyu7onP6/cLhry jaMB5BBX1iNJgtzwXYEMNNhHHIDBVPtZatpzEMdfk4Ru7OwRV67x2vjzyWbCgNGShUhh G9VSIttjC/STi+0KPKmnYu5f2gb9WOfiBo5EfqftbMaGF7mBNn3suOFK0ij1+A60drfu d682tvHeGOmzVmMUj130LIzXQRC5BByjTAsK1gSqDW2ZfxeP7WzuD+nJQvaqa/3su/bv Ayhcanr92MnODp30QsxqR/94EGLpRKGSdkDAYkBw9gOPE/56RxqlOUqYnAB5qS6U32cK C9Ww== X-Gm-Message-State: ABy/qLatSwEGjfxXZI7vqdTFpn/nDBl6m+v0NFilxfYe9nngbshZsNST wfpCVzRm4imypd66Tyqp4xA= X-Google-Smtp-Source: APBJJlF7tv3GLRfRKkTz246OamsaFfya1ER8X0oGthWqCjVRU7HKEm07KZCa7Me+AgpZkCEd7Ed8Cg== X-Received: by 2002:a4a:8201:0:b0:566:f8ee:fa67 with SMTP id d1-20020a4a8201000000b00566f8eefa67mr455270oog.0.1690486169156; Thu, 27 Jul 2023 12:29:29 -0700 (PDT) Received: from rpearson-X570-AORUS-PRO-WIFI.tx.rr.com (2603-8081-140c-1a00-a360-d7ee-0b00-a1d3.res6.spectrum.com. [2603:8081:140c:1a00:a360:d7ee:b00:a1d3]) by smtp.gmail.com with ESMTPSA id f185-20020a4a58c2000000b005658aed310bsm955354oob.15.2023.07.27.12.29.28 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Thu, 27 Jul 2023 12:29:28 -0700 (PDT) From: Bob Pearson To: jgg@nvidia.com, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org, jhack@hpe.com Cc: Bob Pearson Subject: [PATCH for-next v3 8/8] RDMA/rxe: Move next_opcode to rxe_opcode.c Date: Thu, 27 Jul 2023 14:28:32 -0500 Message-Id: <20230727192831.65495-9-rpearsonhpe@gmail.com> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230727192831.65495-1-rpearsonhpe@gmail.com> References: <20230727192831.65495-1-rpearsonhpe@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-rdma@vger.kernel.org Localize opcode specific code to rxe_opcode.c by moving next_opcode() to rxe_next_req_opcode() in rxe_opcode.c. 
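The calling pattern stays the same; as an illustrative fragment taken from the diff that follows (not a complete listing), a requester now asks the helper for the next packet opcode from the work-request opcode and the remaining DMA length:

	/* resid <= qp->mtu selects the LAST/ONLY opcode variants,
	 * otherwise FIRST/MIDDLE, matching the previous in-place logic
	 */
	qp->req.opcode = next_req_opcode(qp, wqe->dma.resid,
					 wqe->wr.opcode);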
Signed-off-by: Bob Pearson --- drivers/infiniband/sw/rxe/rxe_opcode.c | 176 ++++++++++++++++++++++++- drivers/infiniband/sw/rxe/rxe_opcode.h | 4 + drivers/infiniband/sw/rxe/rxe_req.c | 173 +----------------------- 3 files changed, 183 insertions(+), 170 deletions(-) diff --git a/drivers/infiniband/sw/rxe/rxe_opcode.c b/drivers/infiniband/sw/rxe/rxe_opcode.c index 5c0d5c6ffda4..f358b732a751 100644 --- a/drivers/infiniband/sw/rxe/rxe_opcode.c +++ b/drivers/infiniband/sw/rxe/rxe_opcode.c @@ -5,8 +5,8 @@ */ #include -#include "rxe_opcode.h" -#include "rxe_hdr.h" + +#include "rxe.h" /* useful information about work request opcodes and pkt opcodes in * table form @@ -973,3 +973,175 @@ struct rxe_opcode_info rxe_opcode[RXE_NUM_OPCODE] = { }, }; + +static int next_opcode_rc(int last_opcode, u32 wr_opcode, bool fits) +{ + switch (wr_opcode) { + case IB_WR_RDMA_WRITE: + if (last_opcode == IB_OPCODE_RC_RDMA_WRITE_FIRST || + last_opcode == IB_OPCODE_RC_RDMA_WRITE_MIDDLE) + return fits ? + IB_OPCODE_RC_RDMA_WRITE_LAST : + IB_OPCODE_RC_RDMA_WRITE_MIDDLE; + else + return fits ? + IB_OPCODE_RC_RDMA_WRITE_ONLY : + IB_OPCODE_RC_RDMA_WRITE_FIRST; + + case IB_WR_RDMA_WRITE_WITH_IMM: + if (last_opcode == IB_OPCODE_RC_RDMA_WRITE_FIRST || + last_opcode == IB_OPCODE_RC_RDMA_WRITE_MIDDLE) + return fits ? + IB_OPCODE_RC_RDMA_WRITE_LAST_WITH_IMMEDIATE : + IB_OPCODE_RC_RDMA_WRITE_MIDDLE; + else + return fits ? + IB_OPCODE_RC_RDMA_WRITE_ONLY_WITH_IMMEDIATE : + IB_OPCODE_RC_RDMA_WRITE_FIRST; + + case IB_WR_SEND: + if (last_opcode == IB_OPCODE_RC_SEND_FIRST || + last_opcode == IB_OPCODE_RC_SEND_MIDDLE) + return fits ? + IB_OPCODE_RC_SEND_LAST : + IB_OPCODE_RC_SEND_MIDDLE; + else + return fits ? + IB_OPCODE_RC_SEND_ONLY : + IB_OPCODE_RC_SEND_FIRST; + + case IB_WR_SEND_WITH_IMM: + if (last_opcode == IB_OPCODE_RC_SEND_FIRST || + last_opcode == IB_OPCODE_RC_SEND_MIDDLE) + return fits ? + IB_OPCODE_RC_SEND_LAST_WITH_IMMEDIATE : + IB_OPCODE_RC_SEND_MIDDLE; + else + return fits ? + IB_OPCODE_RC_SEND_ONLY_WITH_IMMEDIATE : + IB_OPCODE_RC_SEND_FIRST; + + case IB_WR_SEND_WITH_INV: + if (last_opcode == IB_OPCODE_RC_SEND_FIRST || + last_opcode == IB_OPCODE_RC_SEND_MIDDLE) + return fits ? + IB_OPCODE_RC_SEND_LAST_WITH_INVALIDATE : + IB_OPCODE_RC_SEND_MIDDLE; + else + return fits ? + IB_OPCODE_RC_SEND_ONLY_WITH_INVALIDATE : + IB_OPCODE_RC_SEND_FIRST; + + case IB_WR_FLUSH: + return IB_OPCODE_RC_FLUSH; + + case IB_WR_RDMA_READ: + return IB_OPCODE_RC_RDMA_READ_REQUEST; + + case IB_WR_ATOMIC_CMP_AND_SWP: + return IB_OPCODE_RC_COMPARE_SWAP; + + case IB_WR_ATOMIC_FETCH_AND_ADD: + return IB_OPCODE_RC_FETCH_ADD; + + case IB_WR_ATOMIC_WRITE: + return IB_OPCODE_RC_ATOMIC_WRITE; + + case IB_WR_REG_MR: + case IB_WR_LOCAL_INV: + return OPCODE_NONE; /* not used */ + } + + return -EINVAL; +} + +static int next_opcode_uc(int last_opcode, u32 wr_opcode, bool fits) +{ + switch (wr_opcode) { + case IB_WR_RDMA_WRITE: + if (last_opcode == IB_OPCODE_UC_RDMA_WRITE_FIRST || + last_opcode == IB_OPCODE_UC_RDMA_WRITE_MIDDLE) + return fits ? + IB_OPCODE_UC_RDMA_WRITE_LAST : + IB_OPCODE_UC_RDMA_WRITE_MIDDLE; + else + return fits ? + IB_OPCODE_UC_RDMA_WRITE_ONLY : + IB_OPCODE_UC_RDMA_WRITE_FIRST; + + case IB_WR_RDMA_WRITE_WITH_IMM: + if (last_opcode == IB_OPCODE_UC_RDMA_WRITE_FIRST || + last_opcode == IB_OPCODE_UC_RDMA_WRITE_MIDDLE) + return fits ? + IB_OPCODE_UC_RDMA_WRITE_LAST_WITH_IMMEDIATE : + IB_OPCODE_UC_RDMA_WRITE_MIDDLE; + else + return fits ? 
+ IB_OPCODE_UC_RDMA_WRITE_ONLY_WITH_IMMEDIATE : + IB_OPCODE_UC_RDMA_WRITE_FIRST; + + case IB_WR_SEND: + if (last_opcode == IB_OPCODE_UC_SEND_FIRST || + last_opcode == IB_OPCODE_UC_SEND_MIDDLE) + return fits ? + IB_OPCODE_UC_SEND_LAST : + IB_OPCODE_UC_SEND_MIDDLE; + else + return fits ? + IB_OPCODE_UC_SEND_ONLY : + IB_OPCODE_UC_SEND_FIRST; + + case IB_WR_SEND_WITH_IMM: + if (last_opcode == IB_OPCODE_UC_SEND_FIRST || + last_opcode == IB_OPCODE_UC_SEND_MIDDLE) + return fits ? + IB_OPCODE_UC_SEND_LAST_WITH_IMMEDIATE : + IB_OPCODE_UC_SEND_MIDDLE; + else + return fits ? + IB_OPCODE_UC_SEND_ONLY_WITH_IMMEDIATE : + IB_OPCODE_UC_SEND_FIRST; + } + + return -EINVAL; +} + +/* compute next requester packet opcode + * assumes caller is following the sequence rules + */ +int next_req_opcode(struct rxe_qp *qp, int resid, u32 wr_opcode) +{ + int fits = resid <= qp->mtu; + int last_opcode = qp->req.opcode; + int ret; + + switch (qp_type(qp)) { + case IB_QPT_RC: + ret = next_opcode_rc(last_opcode, wr_opcode, fits); + break; + case IB_QPT_UC: + ret = next_opcode_uc(last_opcode, wr_opcode, fits); + break; + case IB_QPT_UD: + case IB_QPT_GSI: + switch (wr_opcode) { + case IB_WR_SEND: + ret = IB_OPCODE_UD_SEND_ONLY; + break; + case IB_WR_SEND_WITH_IMM: + ret = IB_OPCODE_UD_SEND_ONLY_WITH_IMMEDIATE; + break; + default: + ret = -EINVAL; + break; + } + break; + default: + ret = -EINVAL; + break; + } + + if (ret == -EINVAL) + rxe_err_qp(qp, "unable to compute next opcode"); + return ret; +} diff --git a/drivers/infiniband/sw/rxe/rxe_opcode.h b/drivers/infiniband/sw/rxe/rxe_opcode.h index 5686b691d6b8..61030d9c299f 100644 --- a/drivers/infiniband/sw/rxe/rxe_opcode.h +++ b/drivers/infiniband/sw/rxe/rxe_opcode.h @@ -7,6 +7,8 @@ #ifndef RXE_OPCODE_H #define RXE_OPCODE_H +struct rxe_qp; + /* * contains header bit mask definitions and header lengths * declaration of the rxe_opcode_info struct and @@ -108,4 +110,6 @@ struct rxe_opcode_info { extern struct rxe_opcode_info rxe_opcode[RXE_NUM_OPCODE]; +int next_req_opcode(struct rxe_qp *qp, int resid, u32 wr_opcode); + #endif /* RXE_OPCODE_H */ diff --git a/drivers/infiniband/sw/rxe/rxe_req.c b/drivers/infiniband/sw/rxe/rxe_req.c index 4db1bacdfdb8..51b781ac2844 100644 --- a/drivers/infiniband/sw/rxe/rxe_req.c +++ b/drivers/infiniband/sw/rxe/rxe_req.c @@ -11,9 +11,6 @@ #include "rxe_loc.h" #include "rxe_queue.h" -static int next_opcode(struct rxe_qp *qp, struct rxe_send_wqe *wqe, - u32 opcode); - static inline void retry_first_write_send(struct rxe_qp *qp, struct rxe_send_wqe *wqe, int npsn) { @@ -23,8 +20,8 @@ static inline void retry_first_write_send(struct rxe_qp *qp, int to_send = (wqe->dma.resid > qp->mtu) ? 
qp->mtu : wqe->dma.resid; - qp->req.opcode = next_opcode(qp, wqe, - wqe->wr.opcode); + qp->req.opcode = next_req_opcode(qp, wqe->dma.resid, + wqe->wr.opcode); if (wqe->wr.send_flags & IB_SEND_INLINE) { wqe->dma.resid -= to_send; @@ -51,7 +48,7 @@ static void req_retry(struct rxe_qp *qp) qp->req.wqe_index = cons; qp->req.psn = qp->comp.psn; - qp->req.opcode = -1; + qp->req.opcode = OPCODE_NONE; for (wqe_index = cons; wqe_index != prod; wqe_index = queue_next_index(q, wqe_index)) { @@ -221,166 +218,6 @@ static int rxe_wqe_is_fenced(struct rxe_qp *qp, struct rxe_send_wqe *wqe) atomic_read(&qp->req.rd_atomic) != qp->attr.max_rd_atomic; } -static int next_opcode_rc(struct rxe_qp *qp, u32 opcode, int fits) -{ - switch (opcode) { - case IB_WR_RDMA_WRITE: - if (qp->req.opcode == IB_OPCODE_RC_RDMA_WRITE_FIRST || - qp->req.opcode == IB_OPCODE_RC_RDMA_WRITE_MIDDLE) - return fits ? - IB_OPCODE_RC_RDMA_WRITE_LAST : - IB_OPCODE_RC_RDMA_WRITE_MIDDLE; - else - return fits ? - IB_OPCODE_RC_RDMA_WRITE_ONLY : - IB_OPCODE_RC_RDMA_WRITE_FIRST; - - case IB_WR_RDMA_WRITE_WITH_IMM: - if (qp->req.opcode == IB_OPCODE_RC_RDMA_WRITE_FIRST || - qp->req.opcode == IB_OPCODE_RC_RDMA_WRITE_MIDDLE) - return fits ? - IB_OPCODE_RC_RDMA_WRITE_LAST_WITH_IMMEDIATE : - IB_OPCODE_RC_RDMA_WRITE_MIDDLE; - else - return fits ? - IB_OPCODE_RC_RDMA_WRITE_ONLY_WITH_IMMEDIATE : - IB_OPCODE_RC_RDMA_WRITE_FIRST; - - case IB_WR_SEND: - if (qp->req.opcode == IB_OPCODE_RC_SEND_FIRST || - qp->req.opcode == IB_OPCODE_RC_SEND_MIDDLE) - return fits ? - IB_OPCODE_RC_SEND_LAST : - IB_OPCODE_RC_SEND_MIDDLE; - else - return fits ? - IB_OPCODE_RC_SEND_ONLY : - IB_OPCODE_RC_SEND_FIRST; - - case IB_WR_SEND_WITH_IMM: - if (qp->req.opcode == IB_OPCODE_RC_SEND_FIRST || - qp->req.opcode == IB_OPCODE_RC_SEND_MIDDLE) - return fits ? - IB_OPCODE_RC_SEND_LAST_WITH_IMMEDIATE : - IB_OPCODE_RC_SEND_MIDDLE; - else - return fits ? - IB_OPCODE_RC_SEND_ONLY_WITH_IMMEDIATE : - IB_OPCODE_RC_SEND_FIRST; - - case IB_WR_FLUSH: - return IB_OPCODE_RC_FLUSH; - - case IB_WR_RDMA_READ: - return IB_OPCODE_RC_RDMA_READ_REQUEST; - - case IB_WR_ATOMIC_CMP_AND_SWP: - return IB_OPCODE_RC_COMPARE_SWAP; - - case IB_WR_ATOMIC_FETCH_AND_ADD: - return IB_OPCODE_RC_FETCH_ADD; - - case IB_WR_SEND_WITH_INV: - if (qp->req.opcode == IB_OPCODE_RC_SEND_FIRST || - qp->req.opcode == IB_OPCODE_RC_SEND_MIDDLE) - return fits ? IB_OPCODE_RC_SEND_LAST_WITH_INVALIDATE : - IB_OPCODE_RC_SEND_MIDDLE; - else - return fits ? IB_OPCODE_RC_SEND_ONLY_WITH_INVALIDATE : - IB_OPCODE_RC_SEND_FIRST; - - case IB_WR_ATOMIC_WRITE: - return IB_OPCODE_RC_ATOMIC_WRITE; - - case IB_WR_REG_MR: - case IB_WR_LOCAL_INV: - return opcode; - } - - return -EINVAL; -} - -static int next_opcode_uc(struct rxe_qp *qp, u32 opcode, int fits) -{ - switch (opcode) { - case IB_WR_RDMA_WRITE: - if (qp->req.opcode == IB_OPCODE_UC_RDMA_WRITE_FIRST || - qp->req.opcode == IB_OPCODE_UC_RDMA_WRITE_MIDDLE) - return fits ? - IB_OPCODE_UC_RDMA_WRITE_LAST : - IB_OPCODE_UC_RDMA_WRITE_MIDDLE; - else - return fits ? - IB_OPCODE_UC_RDMA_WRITE_ONLY : - IB_OPCODE_UC_RDMA_WRITE_FIRST; - - case IB_WR_RDMA_WRITE_WITH_IMM: - if (qp->req.opcode == IB_OPCODE_UC_RDMA_WRITE_FIRST || - qp->req.opcode == IB_OPCODE_UC_RDMA_WRITE_MIDDLE) - return fits ? - IB_OPCODE_UC_RDMA_WRITE_LAST_WITH_IMMEDIATE : - IB_OPCODE_UC_RDMA_WRITE_MIDDLE; - else - return fits ? - IB_OPCODE_UC_RDMA_WRITE_ONLY_WITH_IMMEDIATE : - IB_OPCODE_UC_RDMA_WRITE_FIRST; - - case IB_WR_SEND: - if (qp->req.opcode == IB_OPCODE_UC_SEND_FIRST || - qp->req.opcode == IB_OPCODE_UC_SEND_MIDDLE) - return fits ? 
- IB_OPCODE_UC_SEND_LAST : - IB_OPCODE_UC_SEND_MIDDLE; - else - return fits ? - IB_OPCODE_UC_SEND_ONLY : - IB_OPCODE_UC_SEND_FIRST; - - case IB_WR_SEND_WITH_IMM: - if (qp->req.opcode == IB_OPCODE_UC_SEND_FIRST || - qp->req.opcode == IB_OPCODE_UC_SEND_MIDDLE) - return fits ? - IB_OPCODE_UC_SEND_LAST_WITH_IMMEDIATE : - IB_OPCODE_UC_SEND_MIDDLE; - else - return fits ? - IB_OPCODE_UC_SEND_ONLY_WITH_IMMEDIATE : - IB_OPCODE_UC_SEND_FIRST; - } - - return -EINVAL; -} - -static int next_opcode(struct rxe_qp *qp, struct rxe_send_wqe *wqe, - u32 opcode) -{ - int fits = (wqe->dma.resid <= qp->mtu); - - switch (qp_type(qp)) { - case IB_QPT_RC: - return next_opcode_rc(qp, opcode, fits); - - case IB_QPT_UC: - return next_opcode_uc(qp, opcode, fits); - - case IB_QPT_UD: - case IB_QPT_GSI: - switch (opcode) { - case IB_WR_SEND: - return IB_OPCODE_UD_SEND_ONLY; - - case IB_WR_SEND_WITH_IMM: - return IB_OPCODE_UD_SEND_ONLY_WITH_IMMEDIATE; - } - break; - - default: - break; - } - - return -EINVAL; -} - static inline int check_init_depth(struct rxe_qp *qp, struct rxe_send_wqe *wqe) { int depth; @@ -761,7 +598,7 @@ int rxe_requester(struct rxe_qp *qp) if (unlikely(qp_state(qp) == IB_QPS_RESET)) { qp->req.wqe_index = queue_get_consumer(q, QUEUE_TYPE_FROM_CLIENT); - qp->req.opcode = -1; + qp->req.opcode = OPCODE_NONE; qp->req.need_rd_atomic = 0; qp->req.wait_psn = 0; qp->req.need_retry = 0; @@ -813,7 +650,7 @@ int rxe_requester(struct rxe_qp *qp) goto exit; } - opcode = next_opcode(qp, wqe, wqe->wr.opcode); + opcode = next_req_opcode(qp, wqe->dma.resid, wqe->wr.opcode); if (unlikely(opcode < 0)) { wqe->status = IB_WC_LOC_QP_OP_ERR; goto err;