From patchwork Sat Sep 17 03:10:52 2022
X-Patchwork-Submitter: Bob Pearson
X-Patchwork-Id: 12978997
From: Bob Pearson
To: jgg@nvidia.com, zyjzyj2000@gmail.com, lizhijian@fujitsu.com,
	linux-rdma@vger.kernel.org
Cc: Bob Pearson
Subject: [PATCH for-next 01/13] RDMA/rxe: Replace START->FIRST, END->LAST
Date: Fri, 16 Sep 2022 22:10:52 -0500
Message-Id: <20220917031104.21222-2-rpearsonhpe@gmail.com>
In-Reply-To: <20220917031104.21222-1-rpearsonhpe@gmail.com>
References: <20220917031104.21222-1-rpearsonhpe@gmail.com>
X-Mailing-List: linux-rdma@vger.kernel.org

Replace RXE_START_MASK with RXE_FIRST_MASK and RXE_END_MASK with
RXE_LAST_MASK, and add RXE_ONLY_MASK = FIRST | LAST, to match normal
IBA usage.

Signed-off-by: Bob Pearson
---
 drivers/infiniband/sw/rxe/rxe_comp.c   |   6 +-
 drivers/infiniband/sw/rxe/rxe_net.c    |   2 +-
 drivers/infiniband/sw/rxe/rxe_opcode.c | 143 +++++++++++--------------
 drivers/infiniband/sw/rxe/rxe_opcode.h |   5 +-
 drivers/infiniband/sw/rxe/rxe_req.c    |  10 +-
 drivers/infiniband/sw/rxe/rxe_resp.c   |   4 +-
 6 files changed, 76 insertions(+), 94 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe_comp.c b/drivers/infiniband/sw/rxe/rxe_comp.c
index fb0c008af78c..1f10ae4a35d5 100644
--- a/drivers/infiniband/sw/rxe/rxe_comp.c
+++ b/drivers/infiniband/sw/rxe/rxe_comp.c
@@ -221,7 +221,7 @@ static inline enum comp_state check_ack(struct rxe_qp *qp,
 	switch (qp->comp.opcode) {
 	case -1:
 		/* Will catch all *_ONLY cases. */
-		if (!(mask & RXE_START_MASK))
+		if (!(mask & RXE_FIRST_MASK))
 			return COMPST_ERROR;
 		break;
@@ -354,7 +354,7 @@ static inline enum comp_state do_read(struct rxe_qp *qp,
 		return COMPST_ERROR;
 	}
-	if (wqe->dma.resid == 0 && (pkt->mask & RXE_END_MASK))
+	if (wqe->dma.resid == 0 && (pkt->mask & RXE_LAST_MASK))
 		return COMPST_COMP_ACK;
 	return COMPST_UPDATE_COMP;
@@ -636,7 +636,7 @@ int rxe_completer(void *arg)
 		break;
 	case COMPST_UPDATE_COMP:
-		if (pkt->mask & RXE_END_MASK)
+		if (pkt->mask & RXE_LAST_MASK)
 			qp->comp.opcode = -1;
 		else
 			qp->comp.opcode = pkt->opcode;
diff --git a/drivers/infiniband/sw/rxe/rxe_net.c b/drivers/infiniband/sw/rxe/rxe_net.c
index c53f4529f098..d46190ad082f 100644
--- a/drivers/infiniband/sw/rxe/rxe_net.c
+++ b/drivers/infiniband/sw/rxe/rxe_net.c
@@ -428,7 +428,7 @@ int rxe_xmit_packet(struct rxe_qp *qp, struct rxe_pkt_info *pkt,
 	}
 	if ((qp_type(qp) != IB_QPT_RC) &&
-	    (pkt->mask & RXE_END_MASK)) {
+	    (pkt->mask & RXE_LAST_MASK)) {
 		pkt->wqe->state = wqe_state_done;
 		rxe_run_task(&qp->comp.task, 1);
 	}
diff --git a/drivers/infiniband/sw/rxe/rxe_opcode.c b/drivers/infiniband/sw/rxe/rxe_opcode.c
index d4ba4d506f17..0ea587c15931 100644
--- a/drivers/infiniband/sw/rxe/rxe_opcode.c
+++ b/drivers/infiniband/sw/rxe/rxe_opcode.c
@@ -107,7 +107,7 @@ struct rxe_opcode_info rxe_opcode[RXE_NUM_OPCODE] = {
 	[IB_OPCODE_RC_SEND_FIRST] = {
 		.name = "IB_OPCODE_RC_SEND_FIRST",
 		.mask = RXE_PAYLOAD_MASK | RXE_REQ_MASK | RXE_RWR_MASK |
-				RXE_SEND_MASK | RXE_START_MASK,
+				RXE_SEND_MASK | RXE_FIRST_MASK,
 		.length = RXE_BTH_BYTES,
 		.offset = {
 			[RXE_BTH] = 0,
@@ -127,7 +127,7 @@ struct rxe_opcode_info rxe_opcode[RXE_NUM_OPCODE] = {
 	[IB_OPCODE_RC_SEND_LAST] = {
 		.name = "IB_OPCODE_RC_SEND_LAST",
 		.mask = RXE_PAYLOAD_MASK | RXE_REQ_MASK | RXE_COMP_MASK |
-				RXE_SEND_MASK | RXE_END_MASK,
+				RXE_SEND_MASK | RXE_LAST_MASK,
 		.length = RXE_BTH_BYTES,
 		.offset = {
 			[RXE_BTH] = 0,
@@ -137,7 +137,7 @@ struct rxe_opcode_info rxe_opcode[RXE_NUM_OPCODE] = {
 	[IB_OPCODE_RC_SEND_LAST_WITH_IMMEDIATE] = {
 		.name = "IB_OPCODE_RC_SEND_LAST_WITH_IMMEDIATE",
 		.mask = RXE_IMMDT_MASK | RXE_PAYLOAD_MASK | RXE_REQ_MASK |
-				RXE_COMP_MASK | RXE_SEND_MASK | RXE_END_MASK,
+				RXE_COMP_MASK | RXE_SEND_MASK | RXE_LAST_MASK,
 		.length = RXE_BTH_BYTES + RXE_IMMDT_BYTES,
 		.offset = {
 			[RXE_BTH] = 0,
@@ -149,8 +149,7 @@ struct rxe_opcode_info rxe_opcode[RXE_NUM_OPCODE] = {
 	[IB_OPCODE_RC_SEND_ONLY] = {
 		.name = "IB_OPCODE_RC_SEND_ONLY",
 		.mask = RXE_PAYLOAD_MASK | RXE_REQ_MASK | RXE_COMP_MASK |
-				RXE_RWR_MASK | RXE_SEND_MASK |
-				RXE_START_MASK | RXE_END_MASK,
+				RXE_RWR_MASK | RXE_SEND_MASK | RXE_ONLY_MASK,
 		.length = RXE_BTH_BYTES,
 		.offset = {
 			[RXE_BTH] = 0,
@@ -161,7 +160,7 @@ struct rxe_opcode_info rxe_opcode[RXE_NUM_OPCODE] = {
 		.name = "IB_OPCODE_RC_SEND_ONLY_WITH_IMMEDIATE",
 		.mask = RXE_IMMDT_MASK | RXE_PAYLOAD_MASK | RXE_REQ_MASK |
 				RXE_COMP_MASK | RXE_RWR_MASK | RXE_SEND_MASK |
-				RXE_START_MASK | RXE_END_MASK,
+				RXE_ONLY_MASK,
 		.length = RXE_BTH_BYTES + RXE_IMMDT_BYTES,
 		.offset = {
 			[RXE_BTH] = 0,
@@ -173,7 +172,7 @@ struct rxe_opcode_info rxe_opcode[RXE_NUM_OPCODE] = {
 	[IB_OPCODE_RC_RDMA_WRITE_FIRST] = {
 		.name = "IB_OPCODE_RC_RDMA_WRITE_FIRST",
 		.mask = RXE_RETH_MASK | RXE_PAYLOAD_MASK | RXE_REQ_MASK |
-				RXE_WRITE_MASK | RXE_START_MASK,
+				RXE_WRITE_MASK | RXE_FIRST_MASK,
 		.length = RXE_BTH_BYTES + RXE_RETH_BYTES,
 		.offset = {
 			[RXE_BTH] = 0,
@@ -195,7 +194,7 @@ struct rxe_opcode_info rxe_opcode[RXE_NUM_OPCODE] = {
 	[IB_OPCODE_RC_RDMA_WRITE_LAST] = {
 		.name = "IB_OPCODE_RC_RDMA_WRITE_LAST",
 		.mask = RXE_PAYLOAD_MASK | RXE_REQ_MASK | RXE_WRITE_MASK |
-				RXE_END_MASK,
+				RXE_LAST_MASK,
 		.length = RXE_BTH_BYTES,
 		.offset = {
 			[RXE_BTH] = 0,
@@ -206,7 +205,7 @@ struct rxe_opcode_info rxe_opcode[RXE_NUM_OPCODE] = {
 		.name = "IB_OPCODE_RC_RDMA_WRITE_LAST_WITH_IMMEDIATE",
 		.mask = RXE_IMMDT_MASK | RXE_PAYLOAD_MASK | RXE_REQ_MASK |
 				RXE_WRITE_MASK | RXE_COMP_MASK | RXE_RWR_MASK |
-				RXE_END_MASK,
+				RXE_LAST_MASK,
 		.length = RXE_BTH_BYTES + RXE_IMMDT_BYTES,
 		.offset = {
 			[RXE_BTH] = 0,
@@ -218,8 +217,7 @@ struct rxe_opcode_info rxe_opcode[RXE_NUM_OPCODE] = {
 	[IB_OPCODE_RC_RDMA_WRITE_ONLY] = {
 		.name = "IB_OPCODE_RC_RDMA_WRITE_ONLY",
 		.mask = RXE_RETH_MASK | RXE_PAYLOAD_MASK | RXE_REQ_MASK |
-				RXE_WRITE_MASK | RXE_START_MASK |
-				RXE_END_MASK,
+				RXE_WRITE_MASK | RXE_ONLY_MASK,
 		.length = RXE_BTH_BYTES + RXE_RETH_BYTES,
 		.offset = {
 			[RXE_BTH] = 0,
@@ -231,9 +229,8 @@ struct rxe_opcode_info rxe_opcode[RXE_NUM_OPCODE] = {
 	[IB_OPCODE_RC_RDMA_WRITE_ONLY_WITH_IMMEDIATE] = {
 		.name = "IB_OPCODE_RC_RDMA_WRITE_ONLY_WITH_IMMEDIATE",
 		.mask = RXE_RETH_MASK | RXE_IMMDT_MASK | RXE_PAYLOAD_MASK |
-				RXE_REQ_MASK | RXE_WRITE_MASK |
-				RXE_COMP_MASK | RXE_RWR_MASK |
-				RXE_START_MASK | RXE_END_MASK,
+				RXE_REQ_MASK | RXE_WRITE_MASK | RXE_COMP_MASK |
+				RXE_RWR_MASK | RXE_ONLY_MASK,
 		.length = RXE_BTH_BYTES + RXE_IMMDT_BYTES + RXE_RETH_BYTES,
 		.offset = {
 			[RXE_BTH] = 0,
@@ -248,7 +245,7 @@ struct rxe_opcode_info rxe_opcode[RXE_NUM_OPCODE] = {
 	[IB_OPCODE_RC_RDMA_READ_REQUEST] = {
 		.name = "IB_OPCODE_RC_RDMA_READ_REQUEST",
 		.mask = RXE_RETH_MASK | RXE_REQ_MASK | RXE_READ_MASK |
-				RXE_START_MASK | RXE_END_MASK,
+				RXE_ONLY_MASK,
 		.length = RXE_BTH_BYTES + RXE_RETH_BYTES,
 		.offset = {
 			[RXE_BTH] = 0,
@@ -260,7 +257,7 @@ struct rxe_opcode_info rxe_opcode[RXE_NUM_OPCODE] = {
 	[IB_OPCODE_RC_RDMA_READ_RESPONSE_FIRST] = {
 		.name = "IB_OPCODE_RC_RDMA_READ_RESPONSE_FIRST",
 		.mask = RXE_AETH_MASK | RXE_PAYLOAD_MASK | RXE_ACK_MASK |
-				RXE_START_MASK,
+				RXE_FIRST_MASK,
 		.length = RXE_BTH_BYTES + RXE_AETH_BYTES,
 		.offset = {
 			[RXE_BTH] = 0,
@@ -281,7 +278,7 @@ struct rxe_opcode_info rxe_opcode[RXE_NUM_OPCODE] = {
 	[IB_OPCODE_RC_RDMA_READ_RESPONSE_LAST] = {
 		.name = "IB_OPCODE_RC_RDMA_READ_RESPONSE_LAST",
 		.mask = RXE_AETH_MASK | RXE_PAYLOAD_MASK | RXE_ACK_MASK |
-				RXE_END_MASK,
+				RXE_LAST_MASK,
 		.length = RXE_BTH_BYTES + RXE_AETH_BYTES,
 		.offset = {
 			[RXE_BTH] = 0,
@@ -293,7 +290,7 @@ struct rxe_opcode_info rxe_opcode[RXE_NUM_OPCODE] = {
 	[IB_OPCODE_RC_RDMA_READ_RESPONSE_ONLY] = {
 		.name = "IB_OPCODE_RC_RDMA_READ_RESPONSE_ONLY",
 		.mask = RXE_AETH_MASK | RXE_PAYLOAD_MASK | RXE_ACK_MASK |
-				RXE_START_MASK | RXE_END_MASK,
+				RXE_ONLY_MASK,
 		.length = RXE_BTH_BYTES + RXE_AETH_BYTES,
 		.offset = {
 			[RXE_BTH] = 0,
@@ -304,8 +301,7 @@ struct rxe_opcode_info rxe_opcode[RXE_NUM_OPCODE] = {
 	},
 	[IB_OPCODE_RC_ACKNOWLEDGE] = {
 		.name = "IB_OPCODE_RC_ACKNOWLEDGE",
-		.mask = RXE_AETH_MASK | RXE_ACK_MASK | RXE_START_MASK |
-				RXE_END_MASK,
+		.mask = RXE_AETH_MASK | RXE_ACK_MASK | RXE_ONLY_MASK,
 		.length = RXE_BTH_BYTES + RXE_AETH_BYTES,
 		.offset = {
 			[RXE_BTH] = 0,
@@ -317,7 +313,7 @@ struct rxe_opcode_info rxe_opcode[RXE_NUM_OPCODE] = {
 	[IB_OPCODE_RC_ATOMIC_ACKNOWLEDGE] = {
 		.name = "IB_OPCODE_RC_ATOMIC_ACKNOWLEDGE",
 		.mask = RXE_AETH_MASK | RXE_ATMACK_MASK | RXE_ACK_MASK |
-				RXE_START_MASK | RXE_END_MASK,
+				RXE_ONLY_MASK,
 		.length = RXE_BTH_BYTES + RXE_ATMACK_BYTES + RXE_AETH_BYTES,
 		.offset = {
 			[RXE_BTH] = 0,
@@ -332,7 +328,7 @@ struct rxe_opcode_info rxe_opcode[RXE_NUM_OPCODE] = {
 	[IB_OPCODE_RC_COMPARE_SWAP] = {
 		.name = "IB_OPCODE_RC_COMPARE_SWAP",
 		.mask = RXE_ATMETH_MASK | RXE_REQ_MASK | RXE_ATOMIC_MASK |
-				RXE_START_MASK | RXE_END_MASK,
+				RXE_ONLY_MASK,
 		.length = RXE_BTH_BYTES + RXE_ATMETH_BYTES,
 		.offset = {
 			[RXE_BTH] = 0,
@@ -344,7 +340,7 @@ struct rxe_opcode_info rxe_opcode[RXE_NUM_OPCODE] = {
 	[IB_OPCODE_RC_FETCH_ADD] = {
 		.name = "IB_OPCODE_RC_FETCH_ADD",
 		.mask = RXE_ATMETH_MASK | RXE_REQ_MASK | RXE_ATOMIC_MASK |
-				RXE_START_MASK | RXE_END_MASK,
+				RXE_ONLY_MASK,
 		.length = RXE_BTH_BYTES + RXE_ATMETH_BYTES,
 		.offset = {
 			[RXE_BTH] = 0,
@@ -356,7 +352,7 @@ struct rxe_opcode_info rxe_opcode[RXE_NUM_OPCODE] = {
 	[IB_OPCODE_RC_SEND_LAST_WITH_INVALIDATE] = {
 		.name = "IB_OPCODE_RC_SEND_LAST_WITH_INVALIDATE",
 		.mask = RXE_IETH_MASK | RXE_PAYLOAD_MASK | RXE_REQ_MASK |
-				RXE_COMP_MASK | RXE_SEND_MASK | RXE_END_MASK,
+				RXE_COMP_MASK | RXE_SEND_MASK | RXE_LAST_MASK,
 		.length = RXE_BTH_BYTES + RXE_IETH_BYTES,
 		.offset = {
 			[RXE_BTH] = 0,
@@ -369,7 +365,7 @@ struct rxe_opcode_info rxe_opcode[RXE_NUM_OPCODE] = {
 		.name = "IB_OPCODE_RC_SEND_ONLY_INV",
 		.mask = RXE_IETH_MASK | RXE_PAYLOAD_MASK | RXE_REQ_MASK |
 				RXE_COMP_MASK | RXE_RWR_MASK | RXE_SEND_MASK |
-				RXE_END_MASK | RXE_START_MASK,
+				RXE_ONLY_MASK,
 		.length = RXE_BTH_BYTES + RXE_IETH_BYTES,
 		.offset = {
 			[RXE_BTH] = 0,
@@ -383,7 +379,7 @@ struct rxe_opcode_info rxe_opcode[RXE_NUM_OPCODE] = {
 	[IB_OPCODE_UC_SEND_FIRST] = {
 		.name = "IB_OPCODE_UC_SEND_FIRST",
 		.mask = RXE_PAYLOAD_MASK | RXE_REQ_MASK | RXE_RWR_MASK |
-				RXE_SEND_MASK | RXE_START_MASK,
+				RXE_SEND_MASK | RXE_FIRST_MASK,
 		.length = RXE_BTH_BYTES,
 		.offset = {
 			[RXE_BTH] = 0,
@@ -403,7 +399,7 @@ struct rxe_opcode_info rxe_opcode[RXE_NUM_OPCODE] = {
 	[IB_OPCODE_UC_SEND_LAST] = {
 		.name = "IB_OPCODE_UC_SEND_LAST",
 		.mask = RXE_PAYLOAD_MASK | RXE_REQ_MASK | RXE_COMP_MASK |
-				RXE_SEND_MASK | RXE_END_MASK,
+				RXE_SEND_MASK | RXE_LAST_MASK,
 		.length = RXE_BTH_BYTES,
 		.offset = {
 			[RXE_BTH] = 0,
@@ -413,7 +409,7 @@ struct rxe_opcode_info rxe_opcode[RXE_NUM_OPCODE] = {
 	[IB_OPCODE_UC_SEND_LAST_WITH_IMMEDIATE] = {
"IB_OPCODE_UC_SEND_LAST_WITH_IMMEDIATE", .mask = RXE_IMMDT_MASK | RXE_PAYLOAD_MASK | RXE_REQ_MASK | - RXE_COMP_MASK | RXE_SEND_MASK | RXE_END_MASK, + RXE_COMP_MASK | RXE_SEND_MASK | RXE_LAST_MASK, .length = RXE_BTH_BYTES + RXE_IMMDT_BYTES, .offset = { [RXE_BTH] = 0, @@ -425,8 +421,7 @@ struct rxe_opcode_info rxe_opcode[RXE_NUM_OPCODE] = { [IB_OPCODE_UC_SEND_ONLY] = { .name = "IB_OPCODE_UC_SEND_ONLY", .mask = RXE_PAYLOAD_MASK | RXE_REQ_MASK | RXE_COMP_MASK | - RXE_RWR_MASK | RXE_SEND_MASK | - RXE_START_MASK | RXE_END_MASK, + RXE_RWR_MASK | RXE_SEND_MASK | RXE_ONLY_MASK, .length = RXE_BTH_BYTES, .offset = { [RXE_BTH] = 0, @@ -437,7 +432,7 @@ struct rxe_opcode_info rxe_opcode[RXE_NUM_OPCODE] = { .name = "IB_OPCODE_UC_SEND_ONLY_WITH_IMMEDIATE", .mask = RXE_IMMDT_MASK | RXE_PAYLOAD_MASK | RXE_REQ_MASK | RXE_COMP_MASK | RXE_RWR_MASK | RXE_SEND_MASK | - RXE_START_MASK | RXE_END_MASK, + RXE_ONLY_MASK, .length = RXE_BTH_BYTES + RXE_IMMDT_BYTES, .offset = { [RXE_BTH] = 0, @@ -449,7 +444,7 @@ struct rxe_opcode_info rxe_opcode[RXE_NUM_OPCODE] = { [IB_OPCODE_UC_RDMA_WRITE_FIRST] = { .name = "IB_OPCODE_UC_RDMA_WRITE_FIRST", .mask = RXE_RETH_MASK | RXE_PAYLOAD_MASK | RXE_REQ_MASK | - RXE_WRITE_MASK | RXE_START_MASK, + RXE_WRITE_MASK | RXE_FIRST_MASK, .length = RXE_BTH_BYTES + RXE_RETH_BYTES, .offset = { [RXE_BTH] = 0, @@ -471,7 +466,7 @@ struct rxe_opcode_info rxe_opcode[RXE_NUM_OPCODE] = { [IB_OPCODE_UC_RDMA_WRITE_LAST] = { .name = "IB_OPCODE_UC_RDMA_WRITE_LAST", .mask = RXE_PAYLOAD_MASK | RXE_REQ_MASK | RXE_WRITE_MASK | - RXE_END_MASK, + RXE_LAST_MASK, .length = RXE_BTH_BYTES, .offset = { [RXE_BTH] = 0, @@ -482,7 +477,7 @@ struct rxe_opcode_info rxe_opcode[RXE_NUM_OPCODE] = { .name = "IB_OPCODE_UC_RDMA_WRITE_LAST_WITH_IMMEDIATE", .mask = RXE_IMMDT_MASK | RXE_PAYLOAD_MASK | RXE_REQ_MASK | RXE_WRITE_MASK | RXE_COMP_MASK | RXE_RWR_MASK | - RXE_END_MASK, + RXE_LAST_MASK, .length = RXE_BTH_BYTES + RXE_IMMDT_BYTES, .offset = { [RXE_BTH] = 0, @@ -494,8 +489,7 @@ struct rxe_opcode_info rxe_opcode[RXE_NUM_OPCODE] = { [IB_OPCODE_UC_RDMA_WRITE_ONLY] = { .name = "IB_OPCODE_UC_RDMA_WRITE_ONLY", .mask = RXE_RETH_MASK | RXE_PAYLOAD_MASK | RXE_REQ_MASK | - RXE_WRITE_MASK | RXE_START_MASK | - RXE_END_MASK, + RXE_WRITE_MASK | RXE_ONLY_MASK, .length = RXE_BTH_BYTES + RXE_RETH_BYTES, .offset = { [RXE_BTH] = 0, @@ -507,9 +501,8 @@ struct rxe_opcode_info rxe_opcode[RXE_NUM_OPCODE] = { [IB_OPCODE_UC_RDMA_WRITE_ONLY_WITH_IMMEDIATE] = { .name = "IB_OPCODE_UC_RDMA_WRITE_ONLY_WITH_IMMEDIATE", .mask = RXE_RETH_MASK | RXE_IMMDT_MASK | RXE_PAYLOAD_MASK | - RXE_REQ_MASK | RXE_WRITE_MASK | - RXE_COMP_MASK | RXE_RWR_MASK | - RXE_START_MASK | RXE_END_MASK, + RXE_REQ_MASK | RXE_WRITE_MASK | RXE_COMP_MASK | + RXE_RWR_MASK | RXE_ONLY_MASK, .length = RXE_BTH_BYTES + RXE_IMMDT_BYTES + RXE_RETH_BYTES, .offset = { [RXE_BTH] = 0, @@ -527,7 +520,7 @@ struct rxe_opcode_info rxe_opcode[RXE_NUM_OPCODE] = { .name = "IB_OPCODE_RD_SEND_FIRST", .mask = RXE_RDETH_MASK | RXE_DETH_MASK | RXE_PAYLOAD_MASK | RXE_REQ_MASK | RXE_RWR_MASK | RXE_SEND_MASK | - RXE_START_MASK, + RXE_FIRST_MASK, .length = RXE_BTH_BYTES + RXE_DETH_BYTES + RXE_RDETH_BYTES, .offset = { [RXE_BTH] = 0, @@ -542,8 +535,7 @@ struct rxe_opcode_info rxe_opcode[RXE_NUM_OPCODE] = { [IB_OPCODE_RD_SEND_MIDDLE] = { .name = "IB_OPCODE_RD_SEND_MIDDLE", .mask = RXE_RDETH_MASK | RXE_DETH_MASK | RXE_PAYLOAD_MASK | - RXE_REQ_MASK | RXE_SEND_MASK | - RXE_MIDDLE_MASK, + RXE_REQ_MASK | RXE_SEND_MASK | RXE_MIDDLE_MASK, .length = RXE_BTH_BYTES + RXE_DETH_BYTES + RXE_RDETH_BYTES, .offset = { [RXE_BTH] = 0, 
@@ -559,7 +551,7 @@ struct rxe_opcode_info rxe_opcode[RXE_NUM_OPCODE] = {
 		.name = "IB_OPCODE_RD_SEND_LAST",
 		.mask = RXE_RDETH_MASK | RXE_DETH_MASK | RXE_PAYLOAD_MASK |
 				RXE_REQ_MASK | RXE_COMP_MASK | RXE_SEND_MASK |
-				RXE_END_MASK,
+				RXE_LAST_MASK,
 		.length = RXE_BTH_BYTES + RXE_DETH_BYTES + RXE_RDETH_BYTES,
 		.offset = {
 			[RXE_BTH] = 0,
@@ -574,9 +566,8 @@ struct rxe_opcode_info rxe_opcode[RXE_NUM_OPCODE] = {
 	[IB_OPCODE_RD_SEND_LAST_WITH_IMMEDIATE] = {
 		.name = "IB_OPCODE_RD_SEND_LAST_WITH_IMMEDIATE",
 		.mask = RXE_RDETH_MASK | RXE_DETH_MASK | RXE_IMMDT_MASK |
-				RXE_PAYLOAD_MASK | RXE_REQ_MASK |
-				RXE_COMP_MASK | RXE_SEND_MASK |
-				RXE_END_MASK,
+				RXE_PAYLOAD_MASK | RXE_REQ_MASK | RXE_COMP_MASK |
+				RXE_SEND_MASK | RXE_LAST_MASK,
 		.length = RXE_BTH_BYTES + RXE_IMMDT_BYTES + RXE_DETH_BYTES
 				+ RXE_RDETH_BYTES,
 		.offset = {
@@ -597,7 +588,7 @@ struct rxe_opcode_info rxe_opcode[RXE_NUM_OPCODE] = {
 		.name = "IB_OPCODE_RD_SEND_ONLY",
 		.mask = RXE_RDETH_MASK | RXE_DETH_MASK | RXE_PAYLOAD_MASK |
 				RXE_REQ_MASK | RXE_COMP_MASK | RXE_RWR_MASK |
-				RXE_SEND_MASK | RXE_START_MASK | RXE_END_MASK,
+				RXE_SEND_MASK | RXE_ONLY_MASK,
 		.length = RXE_BTH_BYTES + RXE_DETH_BYTES + RXE_RDETH_BYTES,
 		.offset = {
 			[RXE_BTH] = 0,
@@ -612,9 +603,8 @@ struct rxe_opcode_info rxe_opcode[RXE_NUM_OPCODE] = {
 	[IB_OPCODE_RD_SEND_ONLY_WITH_IMMEDIATE] = {
 		.name = "IB_OPCODE_RD_SEND_ONLY_WITH_IMMEDIATE",
 		.mask = RXE_RDETH_MASK | RXE_DETH_MASK | RXE_IMMDT_MASK |
-				RXE_PAYLOAD_MASK | RXE_REQ_MASK |
-				RXE_COMP_MASK | RXE_RWR_MASK | RXE_SEND_MASK |
-				RXE_START_MASK | RXE_END_MASK,
+				RXE_PAYLOAD_MASK | RXE_REQ_MASK | RXE_COMP_MASK |
+				RXE_RWR_MASK | RXE_SEND_MASK | RXE_ONLY_MASK,
 		.length = RXE_BTH_BYTES + RXE_IMMDT_BYTES + RXE_DETH_BYTES
 				+ RXE_RDETH_BYTES,
 		.offset = {
@@ -634,8 +624,8 @@ struct rxe_opcode_info rxe_opcode[RXE_NUM_OPCODE] = {
 	[IB_OPCODE_RD_RDMA_WRITE_FIRST] = {
 		.name = "IB_OPCODE_RD_RDMA_WRITE_FIRST",
 		.mask = RXE_RDETH_MASK | RXE_DETH_MASK | RXE_RETH_MASK |
-				RXE_PAYLOAD_MASK | RXE_REQ_MASK |
-				RXE_WRITE_MASK | RXE_START_MASK,
+				RXE_PAYLOAD_MASK | RXE_REQ_MASK | RXE_WRITE_MASK |
+				RXE_FIRST_MASK,
 		.length = RXE_BTH_BYTES + RXE_RETH_BYTES + RXE_DETH_BYTES
 				+ RXE_RDETH_BYTES,
 		.offset = {
@@ -655,8 +645,7 @@ struct rxe_opcode_info rxe_opcode[RXE_NUM_OPCODE] = {
 	[IB_OPCODE_RD_RDMA_WRITE_MIDDLE] = {
 		.name = "IB_OPCODE_RD_RDMA_WRITE_MIDDLE",
 		.mask = RXE_RDETH_MASK | RXE_DETH_MASK | RXE_PAYLOAD_MASK |
-				RXE_REQ_MASK | RXE_WRITE_MASK |
-				RXE_MIDDLE_MASK,
+				RXE_REQ_MASK | RXE_WRITE_MASK | RXE_MIDDLE_MASK,
 		.length = RXE_BTH_BYTES + RXE_DETH_BYTES + RXE_RDETH_BYTES,
 		.offset = {
 			[RXE_BTH] = 0,
@@ -671,8 +660,7 @@ struct rxe_opcode_info rxe_opcode[RXE_NUM_OPCODE] = {
 	[IB_OPCODE_RD_RDMA_WRITE_LAST] = {
 		.name = "IB_OPCODE_RD_RDMA_WRITE_LAST",
 		.mask = RXE_RDETH_MASK | RXE_DETH_MASK | RXE_PAYLOAD_MASK |
-				RXE_REQ_MASK | RXE_WRITE_MASK |
-				RXE_END_MASK,
+				RXE_REQ_MASK | RXE_WRITE_MASK | RXE_LAST_MASK,
 		.length = RXE_BTH_BYTES + RXE_DETH_BYTES + RXE_RDETH_BYTES,
 		.offset = {
 			[RXE_BTH] = 0,
@@ -687,9 +675,8 @@ struct rxe_opcode_info rxe_opcode[RXE_NUM_OPCODE] = {
 	[IB_OPCODE_RD_RDMA_WRITE_LAST_WITH_IMMEDIATE] = {
 		.name = "IB_OPCODE_RD_RDMA_WRITE_LAST_WITH_IMMEDIATE",
 		.mask = RXE_RDETH_MASK | RXE_DETH_MASK | RXE_IMMDT_MASK |
-				RXE_PAYLOAD_MASK | RXE_REQ_MASK |
-				RXE_WRITE_MASK | RXE_COMP_MASK | RXE_RWR_MASK |
-				RXE_END_MASK,
+				RXE_PAYLOAD_MASK | RXE_REQ_MASK | RXE_WRITE_MASK |
+				RXE_COMP_MASK | RXE_RWR_MASK | RXE_LAST_MASK,
 		.length = RXE_BTH_BYTES + RXE_IMMDT_BYTES + RXE_DETH_BYTES
 				+ RXE_RDETH_BYTES,
 		.offset = {
@@ -709,9 +696,8 @@ struct rxe_opcode_info rxe_opcode[RXE_NUM_OPCODE] = {
 	[IB_OPCODE_RD_RDMA_WRITE_ONLY] = {
 		.name = "IB_OPCODE_RD_RDMA_WRITE_ONLY",
 		.mask = RXE_RDETH_MASK | RXE_DETH_MASK | RXE_RETH_MASK |
-				RXE_PAYLOAD_MASK | RXE_REQ_MASK |
-				RXE_WRITE_MASK | RXE_START_MASK |
-				RXE_END_MASK,
+				RXE_PAYLOAD_MASK | RXE_REQ_MASK | RXE_WRITE_MASK |
+				RXE_ONLY_MASK,
 		.length = RXE_BTH_BYTES + RXE_RETH_BYTES + RXE_DETH_BYTES
 				+ RXE_RDETH_BYTES,
 		.offset = {
@@ -731,10 +717,9 @@ struct rxe_opcode_info rxe_opcode[RXE_NUM_OPCODE] = {
 	[IB_OPCODE_RD_RDMA_WRITE_ONLY_WITH_IMMEDIATE] = {
 		.name = "IB_OPCODE_RD_RDMA_WRITE_ONLY_WITH_IMMEDIATE",
 		.mask = RXE_RDETH_MASK | RXE_DETH_MASK | RXE_RETH_MASK |
-				RXE_IMMDT_MASK | RXE_PAYLOAD_MASK |
-				RXE_REQ_MASK | RXE_WRITE_MASK |
-				RXE_COMP_MASK | RXE_RWR_MASK |
-				RXE_START_MASK | RXE_END_MASK,
+				RXE_IMMDT_MASK | RXE_PAYLOAD_MASK | RXE_REQ_MASK |
+				RXE_WRITE_MASK | RXE_COMP_MASK | RXE_RWR_MASK |
+				RXE_ONLY_MASK,
 		.length = RXE_BTH_BYTES + RXE_IMMDT_BYTES + RXE_RETH_BYTES
 				+ RXE_DETH_BYTES + RXE_RDETH_BYTES,
 		.offset = {
@@ -759,8 +744,7 @@ struct rxe_opcode_info rxe_opcode[RXE_NUM_OPCODE] = {
 	[IB_OPCODE_RD_RDMA_READ_REQUEST] = {
 		.name = "IB_OPCODE_RD_RDMA_READ_REQUEST",
 		.mask = RXE_RDETH_MASK | RXE_DETH_MASK | RXE_RETH_MASK |
-				RXE_REQ_MASK | RXE_READ_MASK |
-				RXE_START_MASK | RXE_END_MASK,
+				RXE_REQ_MASK | RXE_READ_MASK | RXE_ONLY_MASK,
 		.length = RXE_BTH_BYTES + RXE_RETH_BYTES + RXE_DETH_BYTES
 				+ RXE_RDETH_BYTES,
 		.offset = {
@@ -779,9 +763,8 @@ struct rxe_opcode_info rxe_opcode[RXE_NUM_OPCODE] = {
 	},
 	[IB_OPCODE_RD_RDMA_READ_RESPONSE_FIRST] = {
 		.name = "IB_OPCODE_RD_RDMA_READ_RESPONSE_FIRST",
-		.mask = RXE_RDETH_MASK | RXE_AETH_MASK |
-				RXE_PAYLOAD_MASK | RXE_ACK_MASK |
-				RXE_START_MASK,
+		.mask = RXE_RDETH_MASK | RXE_AETH_MASK | RXE_PAYLOAD_MASK |
+				RXE_ACK_MASK | RXE_FIRST_MASK,
 		.length = RXE_BTH_BYTES + RXE_AETH_BYTES + RXE_RDETH_BYTES,
 		.offset = {
 			[RXE_BTH] = 0,
@@ -808,7 +791,7 @@ struct rxe_opcode_info rxe_opcode[RXE_NUM_OPCODE] = {
 	[IB_OPCODE_RD_RDMA_READ_RESPONSE_LAST] = {
 		.name = "IB_OPCODE_RD_RDMA_READ_RESPONSE_LAST",
 		.mask = RXE_RDETH_MASK | RXE_AETH_MASK | RXE_PAYLOAD_MASK |
-				RXE_ACK_MASK | RXE_END_MASK,
+				RXE_ACK_MASK | RXE_LAST_MASK,
 		.length = RXE_BTH_BYTES + RXE_AETH_BYTES + RXE_RDETH_BYTES,
 		.offset = {
 			[RXE_BTH] = 0,
@@ -823,7 +806,7 @@ struct rxe_opcode_info rxe_opcode[RXE_NUM_OPCODE] = {
 	[IB_OPCODE_RD_RDMA_READ_RESPONSE_ONLY] = {
 		.name = "IB_OPCODE_RD_RDMA_READ_RESPONSE_ONLY",
 		.mask = RXE_RDETH_MASK | RXE_AETH_MASK | RXE_PAYLOAD_MASK |
-				RXE_ACK_MASK | RXE_START_MASK | RXE_END_MASK,
+				RXE_ACK_MASK | RXE_ONLY_MASK,
 		.length = RXE_BTH_BYTES + RXE_AETH_BYTES + RXE_RDETH_BYTES,
 		.offset = {
 			[RXE_BTH] = 0,
@@ -838,7 +821,7 @@ struct rxe_opcode_info rxe_opcode[RXE_NUM_OPCODE] = {
 	[IB_OPCODE_RD_ACKNOWLEDGE] = {
 		.name = "IB_OPCODE_RD_ACKNOWLEDGE",
 		.mask = RXE_RDETH_MASK | RXE_AETH_MASK | RXE_ACK_MASK |
-				RXE_START_MASK | RXE_END_MASK,
+				RXE_ONLY_MASK,
 		.length = RXE_BTH_BYTES + RXE_AETH_BYTES + RXE_RDETH_BYTES,
 		.offset = {
 			[RXE_BTH] = 0,
@@ -850,7 +833,7 @@ struct rxe_opcode_info rxe_opcode[RXE_NUM_OPCODE] = {
 	[IB_OPCODE_RD_ATOMIC_ACKNOWLEDGE] = {
 		.name = "IB_OPCODE_RD_ATOMIC_ACKNOWLEDGE",
 		.mask = RXE_RDETH_MASK | RXE_AETH_MASK | RXE_ATMACK_MASK |
-				RXE_ACK_MASK | RXE_START_MASK | RXE_END_MASK,
+				RXE_ACK_MASK | RXE_ONLY_MASK,
 		.length = RXE_BTH_BYTES + RXE_ATMACK_BYTES + RXE_AETH_BYTES
 				+ RXE_RDETH_BYTES,
 		.offset = {
@@ -866,8 +849,7 @@ struct rxe_opcode_info rxe_opcode[RXE_NUM_OPCODE] = {
 	[IB_OPCODE_RD_COMPARE_SWAP] = {
 		.name = "RD_COMPARE_SWAP",
 		.mask = RXE_RDETH_MASK | RXE_DETH_MASK | RXE_ATMETH_MASK |
-				RXE_REQ_MASK | RXE_ATOMIC_MASK |
-				RXE_START_MASK | RXE_END_MASK,
+				RXE_REQ_MASK | RXE_ATOMIC_MASK | RXE_ONLY_MASK,
 		.length = RXE_BTH_BYTES + RXE_ATMETH_BYTES + RXE_DETH_BYTES
 				+ RXE_RDETH_BYTES,
 		.offset = {
@@ -887,8 +869,7 @@ struct rxe_opcode_info rxe_opcode[RXE_NUM_OPCODE] = {
 	[IB_OPCODE_RD_FETCH_ADD] = {
 		.name = "IB_OPCODE_RD_FETCH_ADD",
 		.mask = RXE_RDETH_MASK | RXE_DETH_MASK | RXE_ATMETH_MASK |
-				RXE_REQ_MASK | RXE_ATOMIC_MASK |
-				RXE_START_MASK | RXE_END_MASK,
+				RXE_REQ_MASK | RXE_ATOMIC_MASK | RXE_ONLY_MASK,
 		.length = RXE_BTH_BYTES + RXE_ATMETH_BYTES + RXE_DETH_BYTES
 				+ RXE_RDETH_BYTES,
 		.offset = {
@@ -911,7 +892,7 @@ struct rxe_opcode_info rxe_opcode[RXE_NUM_OPCODE] = {
 		.name = "IB_OPCODE_UD_SEND_ONLY",
 		.mask = RXE_DETH_MASK | RXE_PAYLOAD_MASK | RXE_REQ_MASK |
 				RXE_COMP_MASK | RXE_RWR_MASK | RXE_SEND_MASK |
-				RXE_START_MASK | RXE_END_MASK,
+				RXE_ONLY_MASK,
 		.length = RXE_BTH_BYTES + RXE_DETH_BYTES,
 		.offset = {
 			[RXE_BTH] = 0,
@@ -924,7 +905,7 @@ struct rxe_opcode_info rxe_opcode[RXE_NUM_OPCODE] = {
 		.name = "IB_OPCODE_UD_SEND_ONLY_WITH_IMMEDIATE",
 		.mask = RXE_DETH_MASK | RXE_IMMDT_MASK | RXE_PAYLOAD_MASK |
 				RXE_REQ_MASK | RXE_COMP_MASK | RXE_RWR_MASK |
-				RXE_SEND_MASK | RXE_START_MASK | RXE_END_MASK,
+				RXE_SEND_MASK | RXE_ONLY_MASK,
 		.length = RXE_BTH_BYTES + RXE_IMMDT_BYTES + RXE_DETH_BYTES,
 		.offset = {
 			[RXE_BTH] = 0,
diff --git a/drivers/infiniband/sw/rxe/rxe_opcode.h b/drivers/infiniband/sw/rxe/rxe_opcode.h
index 8f9aaaf260f2..d2b6a8232e92 100644
--- a/drivers/infiniband/sw/rxe/rxe_opcode.h
+++ b/drivers/infiniband/sw/rxe/rxe_opcode.h
@@ -75,9 +75,10 @@ enum rxe_hdr_mask {
 	RXE_RWR_MASK		= BIT(NUM_HDR_TYPES + 6),
 	RXE_COMP_MASK		= BIT(NUM_HDR_TYPES + 7),
-	RXE_START_MASK		= BIT(NUM_HDR_TYPES + 8),
+	RXE_FIRST_MASK		= BIT(NUM_HDR_TYPES + 8),
 	RXE_MIDDLE_MASK		= BIT(NUM_HDR_TYPES + 9),
-	RXE_END_MASK		= BIT(NUM_HDR_TYPES + 10),
+	RXE_LAST_MASK		= BIT(NUM_HDR_TYPES + 10),
+	RXE_ONLY_MASK		= RXE_FIRST_MASK | RXE_LAST_MASK,
 	RXE_LOOPBACK_MASK	= BIT(NUM_HDR_TYPES + 12),
diff --git a/drivers/infiniband/sw/rxe/rxe_req.c b/drivers/infiniband/sw/rxe/rxe_req.c
index f63771207970..e136abc802af 100644
--- a/drivers/infiniband/sw/rxe/rxe_req.c
+++ b/drivers/infiniband/sw/rxe/rxe_req.c
@@ -403,7 +403,7 @@ static struct sk_buff *init_req_packet(struct rxe_qp *qp,
 	/* init bth */
 	solicited = (ibwr->send_flags & IB_SEND_SOLICITED) &&
-			(pkt->mask & RXE_END_MASK) &&
+			(pkt->mask & RXE_LAST_MASK) &&
 			((pkt->mask & (RXE_SEND_MASK)) ||
 			(pkt->mask & (RXE_WRITE_MASK | RXE_IMMDT_MASK)) ==
 			(RXE_WRITE_MASK | RXE_IMMDT_MASK));
@@ -411,7 +411,7 @@ static struct sk_buff *init_req_packet(struct rxe_qp *qp,
 	qp_num = (pkt->mask & RXE_DETH_MASK) ? ibwr->wr.ud.remote_qpn :
 			qp->attr.dest_qp_num;
-	ack_req = ((pkt->mask & RXE_END_MASK) ||
+	ack_req = ((pkt->mask & RXE_LAST_MASK) ||
 			(qp->req.noack_pkts++ > RXE_MAX_PKT_PER_ACK));
 	if (ack_req)
 		qp->req.noack_pkts = 0;
@@ -493,7 +493,7 @@ static void update_wqe_state(struct rxe_qp *qp,
 		struct rxe_send_wqe *wqe,
 		struct rxe_pkt_info *pkt)
 {
-	if (pkt->mask & RXE_END_MASK) {
+	if (pkt->mask & RXE_LAST_MASK) {
 		if (qp_type(qp) == IB_QPT_RC)
 			wqe->state = wqe_state_pending;
 	} else {
@@ -513,7 +513,7 @@ static void update_wqe_psn(struct rxe_qp *qp,
 	if (num_pkt == 0)
 		num_pkt = 1;
-	if (pkt->mask & RXE_START_MASK) {
+	if (pkt->mask & RXE_FIRST_MASK) {
 		wqe->first_psn = qp->req.psn;
 		wqe->last_psn = (qp->req.psn + num_pkt - 1) & BTH_PSN_MASK;
 	}
@@ -550,7 +550,7 @@ static void update_state(struct rxe_qp *qp, struct rxe_pkt_info *pkt)
 {
 	qp->req.opcode = pkt->opcode;
-	if (pkt->mask & RXE_END_MASK)
+	if (pkt->mask & RXE_LAST_MASK)
 		qp->req.wqe_index = queue_next_index(qp->sq.queue,
 						     qp->req.wqe_index);
diff --git a/drivers/infiniband/sw/rxe/rxe_resp.c b/drivers/infiniband/sw/rxe/rxe_resp.c
index 7c336db5cb54..cb560cbe418d 100644
--- a/drivers/infiniband/sw/rxe/rxe_resp.c
+++ b/drivers/infiniband/sw/rxe/rxe_resp.c
@@ -147,7 +147,7 @@ static enum resp_states check_psn(struct rxe_qp *qp,
 	case IB_QPT_UC:
 		if (qp->resp.drop_msg || diff != 0) {
-			if (pkt->mask & RXE_START_MASK) {
+			if (pkt->mask & RXE_FIRST_MASK) {
 				qp->resp.drop_msg = 0;
 				return RESPST_CHK_OP_SEQ;
 			}
@@ -901,7 +901,7 @@ static enum resp_states execute(struct rxe_qp *qp, struct rxe_pkt_info *pkt)
 			return RESPST_ERR_INVALIDATE_RKEY;
 	}
-	if (pkt->mask & RXE_END_MASK)
+	if (pkt->mask & RXE_LAST_MASK)
 		/* We successfully processed this new request. */
 		qp->resp.msn++;
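[Editorial note: the invariant behind the rename above is easiest to see in
isolation. RXE_ONLY_MASK is not a new bit but the OR of RXE_FIRST_MASK and
RXE_LAST_MASK, so a single-packet (ONLY) message tests true for both the
first-packet and last-packet checks in check_ack() and execute(). A minimal
user-space sketch of that property; the bit positions here are illustrative
only, not the driver's actual BIT(NUM_HDR_TYPES + n) values:]

  #include <assert.h>
  #include <stdio.h>

  /* Illustrative bit positions, not the values in rxe_opcode.h. */
  #define RXE_FIRST_MASK  (1u << 0)
  #define RXE_MIDDLE_MASK (1u << 1)
  #define RXE_LAST_MASK   (1u << 2)
  #define RXE_ONLY_MASK   (RXE_FIRST_MASK | RXE_LAST_MASK)

  static const char *classify(unsigned int mask)
  {
          if ((mask & RXE_ONLY_MASK) == RXE_ONLY_MASK)
                  return "only";  /* both first and last packet of its message */
          if (mask & RXE_FIRST_MASK)
                  return "first";
          if (mask & RXE_LAST_MASK)
                  return "last";
          return "middle";
  }

  int main(void)
  {
          /* An ONLY packet satisfies both the FIRST and the LAST tests. */
          assert(RXE_ONLY_MASK & RXE_FIRST_MASK);
          assert(RXE_ONLY_MASK & RXE_LAST_MASK);
          printf("%s %s %s\n", classify(RXE_ONLY_MASK),
                 classify(RXE_FIRST_MASK), classify(RXE_MIDDLE_MASK));
          return 0;
  }

This is why the patch can collapse "RXE_START_MASK | RXE_END_MASK" into the
single RXE_ONLY_MASK token in every *_ONLY table entry without changing the
mask value that is actually stored.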
From patchwork Sat Sep 17 03:10:53 2022
X-Patchwork-Submitter: Bob Pearson
X-Patchwork-Id: 12978996

From: Bob Pearson
To: jgg@nvidia.com, zyjzyj2000@gmail.com, lizhijian@fujitsu.com,
	linux-rdma@vger.kernel.org
Cc: Bob Pearson
Subject: [PATCH for-next 02/13] RDMA/rxe: Move next_opcode() to rxe_opcode.c
Date: Fri, 16 Sep 2022 22:10:53 -0500
Message-Id: <20220917031104.21222-3-rpearsonhpe@gmail.com>
In-Reply-To: <20220917031104.21222-1-rpearsonhpe@gmail.com>
References: <20220917031104.21222-1-rpearsonhpe@gmail.com>
X-Mailing-List: linux-rdma@vger.kernel.org

Move next_opcode() from rxe_req.c to rxe_opcode.c.

Signed-off-by: Bob Pearson
---
 drivers/infiniband/sw/rxe/rxe_loc.h    |   3 +
 drivers/infiniband/sw/rxe/rxe_opcode.c | 156 ++++++++++++++++++++++++-
 drivers/infiniband/sw/rxe/rxe_req.c    | 156 -------------------------
 3 files changed, 157 insertions(+), 158 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe_loc.h b/drivers/infiniband/sw/rxe/rxe_loc.h
index 22f6cc31d1d6..5526d83697c7 100644
--- a/drivers/infiniband/sw/rxe/rxe_loc.h
+++ b/drivers/infiniband/sw/rxe/rxe_loc.h
@@ -99,6 +99,9 @@ int rxe_xmit_packet(struct rxe_qp *qp, struct rxe_pkt_info *pkt,
 		    struct sk_buff *skb);
 const char *rxe_parent_name(struct rxe_dev *rxe, unsigned int port_num);

+/* opcode.c */
+int next_opcode(struct rxe_qp *qp, struct rxe_send_wqe *wqe, u32 opcode);
+
 /* rxe_qp.c */
 int rxe_qp_chk_init(struct rxe_dev *rxe, struct ib_qp_init_attr *init);
 int rxe_qp_from_init(struct rxe_dev *rxe, struct rxe_qp *qp, struct rxe_pd *pd,
diff --git a/drivers/infiniband/sw/rxe/rxe_opcode.c b/drivers/infiniband/sw/rxe/rxe_opcode.c
index 0ea587c15931..6b1a1f197c4d 100644
--- a/drivers/infiniband/sw/rxe/rxe_opcode.c
+++ b/drivers/infiniband/sw/rxe/rxe_opcode.c
@@ -5,8 +5,8 @@
  */

 #include
-#include "rxe_opcode.h"
-#include "rxe_hdr.h"
+
+#include "rxe.h"

 /* useful information about work request opcodes and pkt opcodes in
  * table form
@@ -919,3 +919,155 @@ struct rxe_opcode_info rxe_opcode[RXE_NUM_OPCODE] = {
 	},
 };
+
+static int next_opcode_rc(struct rxe_qp *qp, u32 opcode, int fits)
+{
+	switch (opcode) {
+	case IB_WR_RDMA_WRITE:
+		if (qp->req.opcode == IB_OPCODE_RC_RDMA_WRITE_FIRST ||
+		    qp->req.opcode == IB_OPCODE_RC_RDMA_WRITE_MIDDLE)
+			return fits ?
+				IB_OPCODE_RC_RDMA_WRITE_LAST :
+				IB_OPCODE_RC_RDMA_WRITE_MIDDLE;
+		else
+			return fits ?
+				IB_OPCODE_RC_RDMA_WRITE_ONLY :
+				IB_OPCODE_RC_RDMA_WRITE_FIRST;
+
+	case IB_WR_RDMA_WRITE_WITH_IMM:
+		if (qp->req.opcode == IB_OPCODE_RC_RDMA_WRITE_FIRST ||
+		    qp->req.opcode == IB_OPCODE_RC_RDMA_WRITE_MIDDLE)
+			return fits ?
+				IB_OPCODE_RC_RDMA_WRITE_LAST_WITH_IMMEDIATE :
+				IB_OPCODE_RC_RDMA_WRITE_MIDDLE;
+		else
+			return fits ?
+				IB_OPCODE_RC_RDMA_WRITE_ONLY_WITH_IMMEDIATE :
+				IB_OPCODE_RC_RDMA_WRITE_FIRST;
+
+	case IB_WR_SEND:
+		if (qp->req.opcode == IB_OPCODE_RC_SEND_FIRST ||
+		    qp->req.opcode == IB_OPCODE_RC_SEND_MIDDLE)
+			return fits ?
+				IB_OPCODE_RC_SEND_LAST :
+				IB_OPCODE_RC_SEND_MIDDLE;
+		else
+			return fits ?
+				IB_OPCODE_RC_SEND_ONLY :
+				IB_OPCODE_RC_SEND_FIRST;
+
+	case IB_WR_SEND_WITH_IMM:
+		if (qp->req.opcode == IB_OPCODE_RC_SEND_FIRST ||
+		    qp->req.opcode == IB_OPCODE_RC_SEND_MIDDLE)
+			return fits ?
+				IB_OPCODE_RC_SEND_LAST_WITH_IMMEDIATE :
+				IB_OPCODE_RC_SEND_MIDDLE;
+		else
+			return fits ?
+				IB_OPCODE_RC_SEND_ONLY_WITH_IMMEDIATE :
+				IB_OPCODE_RC_SEND_FIRST;
+
+	case IB_WR_RDMA_READ:
+		return IB_OPCODE_RC_RDMA_READ_REQUEST;
+
+	case IB_WR_ATOMIC_CMP_AND_SWP:
+		return IB_OPCODE_RC_COMPARE_SWAP;
+
+	case IB_WR_ATOMIC_FETCH_AND_ADD:
+		return IB_OPCODE_RC_FETCH_ADD;
+
+	case IB_WR_SEND_WITH_INV:
+		if (qp->req.opcode == IB_OPCODE_RC_SEND_FIRST ||
+		    qp->req.opcode == IB_OPCODE_RC_SEND_MIDDLE)
+			return fits ? IB_OPCODE_RC_SEND_LAST_WITH_INVALIDATE :
+				IB_OPCODE_RC_SEND_MIDDLE;
+		else
+			return fits ? IB_OPCODE_RC_SEND_ONLY_WITH_INVALIDATE :
+				IB_OPCODE_RC_SEND_FIRST;
+	case IB_WR_REG_MR:
+	case IB_WR_LOCAL_INV:
+		return opcode;
+	}
+
+	return -EINVAL;
+}
+
+static int next_opcode_uc(struct rxe_qp *qp, u32 opcode, int fits)
+{
+	switch (opcode) {
+	case IB_WR_RDMA_WRITE:
+		if (qp->req.opcode == IB_OPCODE_UC_RDMA_WRITE_FIRST ||
+		    qp->req.opcode == IB_OPCODE_UC_RDMA_WRITE_MIDDLE)
+			return fits ?
+				IB_OPCODE_UC_RDMA_WRITE_LAST :
+				IB_OPCODE_UC_RDMA_WRITE_MIDDLE;
+		else
+			return fits ?
+				IB_OPCODE_UC_RDMA_WRITE_ONLY :
+				IB_OPCODE_UC_RDMA_WRITE_FIRST;
+
+	case IB_WR_RDMA_WRITE_WITH_IMM:
+		if (qp->req.opcode == IB_OPCODE_UC_RDMA_WRITE_FIRST ||
+		    qp->req.opcode == IB_OPCODE_UC_RDMA_WRITE_MIDDLE)
+			return fits ?
+				IB_OPCODE_UC_RDMA_WRITE_LAST_WITH_IMMEDIATE :
+				IB_OPCODE_UC_RDMA_WRITE_MIDDLE;
+		else
+			return fits ?
+				IB_OPCODE_UC_RDMA_WRITE_ONLY_WITH_IMMEDIATE :
+				IB_OPCODE_UC_RDMA_WRITE_FIRST;
+
+	case IB_WR_SEND:
+		if (qp->req.opcode == IB_OPCODE_UC_SEND_FIRST ||
+		    qp->req.opcode == IB_OPCODE_UC_SEND_MIDDLE)
+			return fits ?
+				IB_OPCODE_UC_SEND_LAST :
+				IB_OPCODE_UC_SEND_MIDDLE;
+		else
+			return fits ?
+				IB_OPCODE_UC_SEND_ONLY :
+				IB_OPCODE_UC_SEND_FIRST;
+
+	case IB_WR_SEND_WITH_IMM:
+		if (qp->req.opcode == IB_OPCODE_UC_SEND_FIRST ||
+		    qp->req.opcode == IB_OPCODE_UC_SEND_MIDDLE)
+			return fits ?
+				IB_OPCODE_UC_SEND_LAST_WITH_IMMEDIATE :
+				IB_OPCODE_UC_SEND_MIDDLE;
+		else
+			return fits ?
+				IB_OPCODE_UC_SEND_ONLY_WITH_IMMEDIATE :
+				IB_OPCODE_UC_SEND_FIRST;
+	}
+
+	return -EINVAL;
+}
+
+int next_opcode(struct rxe_qp *qp, struct rxe_send_wqe *wqe, u32 opcode)
+{
+	int fits = (wqe->dma.resid <= qp->mtu);
+
+	switch (qp_type(qp)) {
+	case IB_QPT_RC:
+		return next_opcode_rc(qp, opcode, fits);
+
+	case IB_QPT_UC:
+		return next_opcode_uc(qp, opcode, fits);
+
+	case IB_QPT_UD:
+	case IB_QPT_GSI:
+		switch (opcode) {
+		case IB_WR_SEND:
+			return IB_OPCODE_UD_SEND_ONLY;
+
+		case IB_WR_SEND_WITH_IMM:
+			return IB_OPCODE_UD_SEND_ONLY_WITH_IMMEDIATE;
+		}
+		break;
+
+	default:
+		break;
+	}
+
+	return -EINVAL;
+}
diff --git a/drivers/infiniband/sw/rxe/rxe_req.c b/drivers/infiniband/sw/rxe/rxe_req.c
index e136abc802af..d2a9abfed596 100644
--- a/drivers/infiniband/sw/rxe/rxe_req.c
+++ b/drivers/infiniband/sw/rxe/rxe_req.c
@@ -11,9 +11,6 @@
 #include "rxe_loc.h"
 #include "rxe_queue.h"

-static int next_opcode(struct rxe_qp *qp, struct rxe_send_wqe *wqe,
-		       u32 opcode);
-
 static inline void retry_first_write_send(struct rxe_qp *qp,
 					  struct rxe_send_wqe *wqe, int npsn)
 {
@@ -194,159 +191,6 @@ static int rxe_wqe_is_fenced(struct rxe_qp *qp, struct rxe_send_wqe *wqe)
 		atomic_read(&qp->req.rd_atomic) != qp->attr.max_rd_atomic;
 }

-static int next_opcode_rc(struct rxe_qp *qp, u32 opcode, int fits)
-{
-	switch (opcode) {
-	case IB_WR_RDMA_WRITE:
-		if (qp->req.opcode == IB_OPCODE_RC_RDMA_WRITE_FIRST ||
-		    qp->req.opcode == IB_OPCODE_RC_RDMA_WRITE_MIDDLE)
-			return fits ?
-				IB_OPCODE_RC_RDMA_WRITE_LAST :
-				IB_OPCODE_RC_RDMA_WRITE_MIDDLE;
-		else
-			return fits ?
-				IB_OPCODE_RC_RDMA_WRITE_ONLY :
-				IB_OPCODE_RC_RDMA_WRITE_FIRST;
-
-	case IB_WR_RDMA_WRITE_WITH_IMM:
-		if (qp->req.opcode == IB_OPCODE_RC_RDMA_WRITE_FIRST ||
-		    qp->req.opcode == IB_OPCODE_RC_RDMA_WRITE_MIDDLE)
-			return fits ?
-				IB_OPCODE_RC_RDMA_WRITE_LAST_WITH_IMMEDIATE :
-				IB_OPCODE_RC_RDMA_WRITE_MIDDLE;
-		else
-			return fits ?
-				IB_OPCODE_RC_RDMA_WRITE_ONLY_WITH_IMMEDIATE :
-				IB_OPCODE_RC_RDMA_WRITE_FIRST;
-
-	case IB_WR_SEND:
-		if (qp->req.opcode == IB_OPCODE_RC_SEND_FIRST ||
-		    qp->req.opcode == IB_OPCODE_RC_SEND_MIDDLE)
-			return fits ?
-				IB_OPCODE_RC_SEND_LAST :
-				IB_OPCODE_RC_SEND_MIDDLE;
-		else
-			return fits ?
-				IB_OPCODE_RC_SEND_ONLY :
-				IB_OPCODE_RC_SEND_FIRST;
-
-	case IB_WR_SEND_WITH_IMM:
-		if (qp->req.opcode == IB_OPCODE_RC_SEND_FIRST ||
-		    qp->req.opcode == IB_OPCODE_RC_SEND_MIDDLE)
-			return fits ?
-				IB_OPCODE_RC_SEND_LAST_WITH_IMMEDIATE :
-				IB_OPCODE_RC_SEND_MIDDLE;
-		else
-			return fits ?
-				IB_OPCODE_RC_SEND_ONLY_WITH_IMMEDIATE :
-				IB_OPCODE_RC_SEND_FIRST;
-
-	case IB_WR_RDMA_READ:
-		return IB_OPCODE_RC_RDMA_READ_REQUEST;
-
-	case IB_WR_ATOMIC_CMP_AND_SWP:
-		return IB_OPCODE_RC_COMPARE_SWAP;
-
-	case IB_WR_ATOMIC_FETCH_AND_ADD:
-		return IB_OPCODE_RC_FETCH_ADD;
-
-	case IB_WR_SEND_WITH_INV:
-		if (qp->req.opcode == IB_OPCODE_RC_SEND_FIRST ||
-		    qp->req.opcode == IB_OPCODE_RC_SEND_MIDDLE)
-			return fits ? IB_OPCODE_RC_SEND_LAST_WITH_INVALIDATE :
-				IB_OPCODE_RC_SEND_MIDDLE;
-		else
-			return fits ? IB_OPCODE_RC_SEND_ONLY_WITH_INVALIDATE :
-				IB_OPCODE_RC_SEND_FIRST;
-	case IB_WR_REG_MR:
-	case IB_WR_LOCAL_INV:
-		return opcode;
-	}
-
-	return -EINVAL;
-}
-
-static int next_opcode_uc(struct rxe_qp *qp, u32 opcode, int fits)
-{
-	switch (opcode) {
-	case IB_WR_RDMA_WRITE:
-		if (qp->req.opcode == IB_OPCODE_UC_RDMA_WRITE_FIRST ||
-		    qp->req.opcode == IB_OPCODE_UC_RDMA_WRITE_MIDDLE)
-			return fits ?
-				IB_OPCODE_UC_RDMA_WRITE_LAST :
-				IB_OPCODE_UC_RDMA_WRITE_MIDDLE;
-		else
-			return fits ?
-				IB_OPCODE_UC_RDMA_WRITE_ONLY :
-				IB_OPCODE_UC_RDMA_WRITE_FIRST;
-
-	case IB_WR_RDMA_WRITE_WITH_IMM:
-		if (qp->req.opcode == IB_OPCODE_UC_RDMA_WRITE_FIRST ||
-		    qp->req.opcode == IB_OPCODE_UC_RDMA_WRITE_MIDDLE)
-			return fits ?
-				IB_OPCODE_UC_RDMA_WRITE_LAST_WITH_IMMEDIATE :
-				IB_OPCODE_UC_RDMA_WRITE_MIDDLE;
-		else
-			return fits ?
-				IB_OPCODE_UC_RDMA_WRITE_ONLY_WITH_IMMEDIATE :
-				IB_OPCODE_UC_RDMA_WRITE_FIRST;
-
-	case IB_WR_SEND:
-		if (qp->req.opcode == IB_OPCODE_UC_SEND_FIRST ||
-		    qp->req.opcode == IB_OPCODE_UC_SEND_MIDDLE)
-			return fits ?
-				IB_OPCODE_UC_SEND_LAST :
-				IB_OPCODE_UC_SEND_MIDDLE;
-		else
-			return fits ?
-				IB_OPCODE_UC_SEND_ONLY :
-				IB_OPCODE_UC_SEND_FIRST;
-
-	case IB_WR_SEND_WITH_IMM:
-		if (qp->req.opcode == IB_OPCODE_UC_SEND_FIRST ||
-		    qp->req.opcode == IB_OPCODE_UC_SEND_MIDDLE)
-			return fits ?
-				IB_OPCODE_UC_SEND_LAST_WITH_IMMEDIATE :
-				IB_OPCODE_UC_SEND_MIDDLE;
-		else
-			return fits ?
-				IB_OPCODE_UC_SEND_ONLY_WITH_IMMEDIATE :
-				IB_OPCODE_UC_SEND_FIRST;
-	}
-
-	return -EINVAL;
-}
-
-static int next_opcode(struct rxe_qp *qp, struct rxe_send_wqe *wqe,
-		       u32 opcode)
-{
-	int fits = (wqe->dma.resid <= qp->mtu);
-
-	switch (qp_type(qp)) {
-	case IB_QPT_RC:
-		return next_opcode_rc(qp, opcode, fits);
-
-	case IB_QPT_UC:
-		return next_opcode_uc(qp, opcode, fits);
-
-	case IB_QPT_UD:
-	case IB_QPT_GSI:
-		switch (opcode) {
-		case IB_WR_SEND:
-			return IB_OPCODE_UD_SEND_ONLY;
-
-		case IB_WR_SEND_WITH_IMM:
-			return IB_OPCODE_UD_SEND_ONLY_WITH_IMMEDIATE;
-		}
-		break;
-
-	default:
-		break;
-	}
-
-	return -EINVAL;
-}
-
 static inline int check_init_depth(struct rxe_qp *qp, struct rxe_send_wqe *wqe)
 {
 	int depth;
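[Editorial note: seen on its own, the selection rule that next_opcode_rc()
and next_opcode_uc() both implement is a tiny two-input state machine:
"fits" says whether the remaining payload fits in one MTU (in the driver,
fits = wqe->dma.resid <= qp->mtu), and the previous send-side opcode says
whether a multi-packet message is already in progress. A hedged sketch of
just that rule, with invented names for illustration:]

  #include <stdio.h>

  enum frag { FRAG_FIRST, FRAG_MIDDLE, FRAG_LAST, FRAG_ONLY };

  /* in_progress: the previous opcode was a *_FIRST or *_MIDDLE of the
   * same operation; fits: the remaining payload fits in one MTU. */
  static enum frag next_fragment(int in_progress, int fits)
  {
          if (in_progress)
                  return fits ? FRAG_LAST : FRAG_MIDDLE;
          return fits ? FRAG_ONLY : FRAG_FIRST;
  }

  int main(void)
  {
          printf("%d %d %d %d\n",
                 next_fragment(0, 1),   /* ONLY: whole message in one packet */
                 next_fragment(0, 0),   /* FIRST: start a multi-packet message */
                 next_fragment(1, 0),   /* MIDDLE: keep going */
                 next_fragment(1, 1));  /* LAST: finish the message */
          return 0;
  }

Every per-transport case in the moved functions is this rule specialized to
one work-request opcode, which is why the move to rxe_opcode.c (next to the
opcode tables) is purely mechanical.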
From patchwork Sat Sep 17 03:10:54 2022
X-Patchwork-Submitter: Bob Pearson
X-Patchwork-Id: 12979007

From: Bob Pearson
To: jgg@nvidia.com, zyjzyj2000@gmail.com, lizhijian@fujitsu.com,
	linux-rdma@vger.kernel.org
Cc: Bob Pearson
Subject: [PATCH for-next 03/13] RDMA: Add xrc opcodes to ib_pack.h
Date: Fri, 16 Sep 2022 22:10:54 -0500
Message-Id: <20220917031104.21222-4-rpearsonhpe@gmail.com>
In-Reply-To: <20220917031104.21222-1-rpearsonhpe@gmail.com>
References: <20220917031104.21222-1-rpearsonhpe@gmail.com>
X-Mailing-List: linux-rdma@vger.kernel.org

Extend ib_pack.h to include the XRC opcodes.
Signed-off-by: Bob Pearson
---
 include/rdma/ib_pack.h | 32 +++++++++++++++++++++++++++++++-
 1 file changed, 31 insertions(+), 1 deletion(-)

diff --git a/include/rdma/ib_pack.h b/include/rdma/ib_pack.h
index a9162f25beaf..cc9aac05d38e 100644
--- a/include/rdma/ib_pack.h
+++ b/include/rdma/ib_pack.h
@@ -56,8 +56,11 @@ enum {
 	IB_OPCODE_UD                                = 0x60,
 	/* per IBTA 1.3 vol 1 Table 38, A10.3.2 */
 	IB_OPCODE_CNP                               = 0x80,
+	IB_OPCODE_XRC                               = 0xa0,
 	/* Manufacturer specific */
 	IB_OPCODE_MSP                               = 0xe0,
+	/* opcode type bits */
+	IB_OPCODE_TYPE                              = 0xe0,

 	/* operations -- just used to define real constants */
 	IB_OPCODE_SEND_FIRST                        = 0x00,
@@ -84,6 +87,8 @@ enum {
 	/* opcode 0x15 is reserved */
 	IB_OPCODE_SEND_LAST_WITH_INVALIDATE         = 0x16,
 	IB_OPCODE_SEND_ONLY_WITH_INVALIDATE         = 0x17,
+	/* opcode command bits */
+	IB_OPCODE_CMD                               = 0x1f,

 	/* real constants follow -- see comment about above
 	   IB_OPCODE() macro for more details */
@@ -152,7 +157,32 @@ enum {
 	/* UD */
 	IB_OPCODE(UD, SEND_ONLY),
-	IB_OPCODE(UD, SEND_ONLY_WITH_IMMEDIATE)
+	IB_OPCODE(UD, SEND_ONLY_WITH_IMMEDIATE),
+
+	/* XRC */
+	IB_OPCODE(XRC, SEND_FIRST),
+	IB_OPCODE(XRC, SEND_MIDDLE),
+	IB_OPCODE(XRC, SEND_LAST),
+	IB_OPCODE(XRC, SEND_LAST_WITH_IMMEDIATE),
+	IB_OPCODE(XRC, SEND_ONLY),
+	IB_OPCODE(XRC, SEND_ONLY_WITH_IMMEDIATE),
+	IB_OPCODE(XRC, RDMA_WRITE_FIRST),
+	IB_OPCODE(XRC, RDMA_WRITE_MIDDLE),
+	IB_OPCODE(XRC, RDMA_WRITE_LAST),
+	IB_OPCODE(XRC, RDMA_WRITE_LAST_WITH_IMMEDIATE),
+	IB_OPCODE(XRC, RDMA_WRITE_ONLY),
+	IB_OPCODE(XRC, RDMA_WRITE_ONLY_WITH_IMMEDIATE),
+	IB_OPCODE(XRC, RDMA_READ_REQUEST),
+	IB_OPCODE(XRC, RDMA_READ_RESPONSE_FIRST),
+	IB_OPCODE(XRC, RDMA_READ_RESPONSE_MIDDLE),
+	IB_OPCODE(XRC, RDMA_READ_RESPONSE_LAST),
+	IB_OPCODE(XRC, RDMA_READ_RESPONSE_ONLY),
+	IB_OPCODE(XRC, ACKNOWLEDGE),
+	IB_OPCODE(XRC, ATOMIC_ACKNOWLEDGE),
+	IB_OPCODE(XRC, COMPARE_SWAP),
+	IB_OPCODE(XRC, FETCH_ADD),
+	IB_OPCODE(XRC, SEND_LAST_WITH_INVALIDATE),
+	IB_OPCODE(XRC, SEND_ONLY_WITH_INVALIDATE),
 };

 enum {
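[Editorial note: the two new field constants make the BTH opcode layout
explicit: a BTH opcode is an 8-bit value whose top three bits
(IB_OPCODE_TYPE) select the transport - RC 0x00, UC 0x20, RD 0x40, UD 0x60,
XRC 0xa0 - and whose low five bits (IB_OPCODE_CMD) select the operation
within that transport. A small sketch of the decomposition; the SEND_ONLY
value 0x04 comes from the operation constants in the same enum:]

  #include <stdio.h>

  #define IB_OPCODE_TYPE      0xe0  /* transport (opcode type) bits */
  #define IB_OPCODE_CMD       0x1f  /* operation (opcode command) bits */
  #define IB_OPCODE_XRC       0xa0
  #define IB_OPCODE_SEND_ONLY 0x04

  int main(void)
  {
          /* This is exactly what the IB_OPCODE() macro composes. */
          unsigned int opcode = IB_OPCODE_XRC | IB_OPCODE_SEND_ONLY; /* 0xa4 */

          printf("opcode %#x: transport %#x, command %#x\n", opcode,
                 opcode & IB_OPCODE_TYPE,   /* 0xa0 -> XRC */
                 opcode & IB_OPCODE_CMD);   /* 0x04 -> SEND_ONLY */
          return 0;
  }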
From patchwork Sat Sep 17 03:10:55 2022
X-Patchwork-Submitter: Bob Pearson
X-Patchwork-Id: 12979008

From: Bob Pearson
To: jgg@nvidia.com, zyjzyj2000@gmail.com, lizhijian@fujitsu.com,
	linux-rdma@vger.kernel.org
Cc: Bob Pearson
Subject: [PATCH for-next 04/13] RDMA/rxe: Extend opcodes and headers to support xrc
Date: Fri, 16 Sep 2022 22:10:55 -0500
Message-Id: <20220917031104.21222-5-rpearsonhpe@gmail.com>
In-Reply-To: <20220917031104.21222-1-rpearsonhpe@gmail.com>
References: <20220917031104.21222-1-rpearsonhpe@gmail.com>
X-Mailing-List: linux-rdma@vger.kernel.org

Extend rxe_hdr.h to include the xrceth header, and extend the opcode
tables in rxe_opcode.c to support XRC operations and QPs.
Signed-off-by: Bob Pearson
---
 drivers/infiniband/sw/rxe/rxe_hdr.h    |  36 +++
 drivers/infiniband/sw/rxe/rxe_opcode.c | 379 +++++++++++++++++++++++--
 drivers/infiniband/sw/rxe/rxe_opcode.h |   4 +-
 3 files changed, 395 insertions(+), 24 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe_hdr.h b/drivers/infiniband/sw/rxe/rxe_hdr.h
index e432f9e37795..e947bcf75209 100644
--- a/drivers/infiniband/sw/rxe/rxe_hdr.h
+++ b/drivers/infiniband/sw/rxe/rxe_hdr.h
@@ -900,6 +900,41 @@ static inline void ieth_set_rkey(struct rxe_pkt_info *pkt, u32 rkey)
 		rxe_opcode[pkt->opcode].offset[RXE_IETH], rkey);
 }

+/******************************************************************************
+ * XRC Extended Transport Header
+ ******************************************************************************/
+struct rxe_xrceth {
+	__be32			srqn;
+};
+
+#define XRCETH_SRQN_MASK	(0x00ffffff)
+
+static inline u32 __xrceth_srqn(void *arg)
+{
+	struct rxe_xrceth *xrceth = arg;
+
+	return be32_to_cpu(xrceth->srqn);
+}
+
+static inline void __xrceth_set_srqn(void *arg, u32 srqn)
+{
+	struct rxe_xrceth *xrceth = arg;
+
+	xrceth->srqn = cpu_to_be32(srqn & XRCETH_SRQN_MASK);
+}
+
+static inline u32 xrceth_srqn(struct rxe_pkt_info *pkt)
+{
+	return __xrceth_srqn(pkt->hdr +
+		rxe_opcode[pkt->opcode].offset[RXE_XRCETH]);
+}
+
+static inline void xrceth_set_srqn(struct rxe_pkt_info *pkt, u32 srqn)
+{
+	__xrceth_set_srqn(pkt->hdr +
+		rxe_opcode[pkt->opcode].offset[RXE_XRCETH], srqn);
+}
+
 enum rxe_hdr_length {
 	RXE_BTH_BYTES		= sizeof(struct rxe_bth),
 	RXE_DETH_BYTES		= sizeof(struct rxe_deth),
@@ -909,6 +944,7 @@ enum rxe_hdr_length {
 	RXE_ATMACK_BYTES	= sizeof(struct rxe_atmack),
 	RXE_ATMETH_BYTES	= sizeof(struct rxe_atmeth),
 	RXE_IETH_BYTES		= sizeof(struct rxe_ieth),
+	RXE_XRCETH_BYTES	= sizeof(struct rxe_xrceth),
 	RXE_RDETH_BYTES		= sizeof(struct rxe_rdeth),
 };
diff --git a/drivers/infiniband/sw/rxe/rxe_opcode.c b/drivers/infiniband/sw/rxe/rxe_opcode.c
index 6b1a1f197c4d..4ae926a37ef8 100644
--- a/drivers/infiniband/sw/rxe/rxe_opcode.c
+++ b/drivers/infiniband/sw/rxe/rxe_opcode.c
@@ -15,51 +15,58 @@ struct rxe_wr_opcode_info rxe_wr_opcode_info[] = {
 	[IB_WR_RDMA_WRITE] = {
 		.name = "IB_WR_RDMA_WRITE",
 		.mask = {
-			[IB_QPT_RC] = WR_INLINE_MASK | WR_WRITE_MASK,
-			[IB_QPT_UC] = WR_INLINE_MASK | WR_WRITE_MASK,
+			[IB_QPT_RC]      = WR_INLINE_MASK | WR_WRITE_MASK,
+			[IB_QPT_UC]      = WR_INLINE_MASK | WR_WRITE_MASK,
+			[IB_QPT_XRC_INI] = WR_INLINE_MASK | WR_WRITE_MASK,
 		},
 	},
 	[IB_WR_RDMA_WRITE_WITH_IMM] = {
 		.name = "IB_WR_RDMA_WRITE_WITH_IMM",
 		.mask = {
-			[IB_QPT_RC] = WR_INLINE_MASK | WR_WRITE_MASK,
-			[IB_QPT_UC] = WR_INLINE_MASK | WR_WRITE_MASK,
+			[IB_QPT_RC]      = WR_INLINE_MASK | WR_WRITE_MASK,
+			[IB_QPT_UC]      = WR_INLINE_MASK | WR_WRITE_MASK,
+			[IB_QPT_XRC_INI] = WR_INLINE_MASK | WR_WRITE_MASK,
 		},
 	},
 	[IB_WR_SEND] = {
 		.name = "IB_WR_SEND",
 		.mask = {
-			[IB_QPT_GSI] = WR_INLINE_MASK | WR_SEND_MASK,
-			[IB_QPT_RC] = WR_INLINE_MASK | WR_SEND_MASK,
-			[IB_QPT_UC] = WR_INLINE_MASK | WR_SEND_MASK,
-			[IB_QPT_UD] = WR_INLINE_MASK | WR_SEND_MASK,
+			[IB_QPT_GSI]     = WR_INLINE_MASK | WR_SEND_MASK,
+			[IB_QPT_RC]      = WR_INLINE_MASK | WR_SEND_MASK,
+			[IB_QPT_UC]      = WR_INLINE_MASK | WR_SEND_MASK,
+			[IB_QPT_UD]      = WR_INLINE_MASK | WR_SEND_MASK,
+			[IB_QPT_XRC_INI] = WR_INLINE_MASK | WR_SEND_MASK,
 		},
 	},
 	[IB_WR_SEND_WITH_IMM] = {
 		.name = "IB_WR_SEND_WITH_IMM",
 		.mask = {
-			[IB_QPT_GSI] = WR_INLINE_MASK | WR_SEND_MASK,
-			[IB_QPT_RC] = WR_INLINE_MASK | WR_SEND_MASK,
-			[IB_QPT_UC] = WR_INLINE_MASK | WR_SEND_MASK,
-			[IB_QPT_UD] = WR_INLINE_MASK | WR_SEND_MASK,
+			[IB_QPT_GSI]     = WR_INLINE_MASK | WR_SEND_MASK,
+			[IB_QPT_RC]      = WR_INLINE_MASK | WR_SEND_MASK,
+			[IB_QPT_UC]      = WR_INLINE_MASK | WR_SEND_MASK,
+			[IB_QPT_UD]      = WR_INLINE_MASK | WR_SEND_MASK,
+			[IB_QPT_XRC_INI] = WR_INLINE_MASK | WR_SEND_MASK,
 		},
 	},
 	[IB_WR_RDMA_READ] = {
 		.name = "IB_WR_RDMA_READ",
 		.mask = {
-			[IB_QPT_RC] = WR_READ_MASK,
+			[IB_QPT_RC]      = WR_READ_MASK,
+			[IB_QPT_XRC_INI] = WR_READ_MASK,
 		},
 	},
 	[IB_WR_ATOMIC_CMP_AND_SWP] = {
 		.name = "IB_WR_ATOMIC_CMP_AND_SWP",
 		.mask = {
-			[IB_QPT_RC] = WR_ATOMIC_MASK,
+			[IB_QPT_RC]      = WR_ATOMIC_MASK,
+			[IB_QPT_XRC_INI] = WR_ATOMIC_MASK,
 		},
 	},
 	[IB_WR_ATOMIC_FETCH_AND_ADD] = {
 		.name = "IB_WR_ATOMIC_FETCH_AND_ADD",
 		.mask = {
-			[IB_QPT_RC] = WR_ATOMIC_MASK,
+			[IB_QPT_RC]      = WR_ATOMIC_MASK,
+			[IB_QPT_XRC_INI] = WR_ATOMIC_MASK,
 		},
 	},
 	[IB_WR_LSO] = {
@@ -71,34 +78,39 @@ struct rxe_wr_opcode_info rxe_wr_opcode_info[] = {
 	[IB_WR_SEND_WITH_INV] = {
 		.name = "IB_WR_SEND_WITH_INV",
 		.mask = {
-			[IB_QPT_RC] = WR_INLINE_MASK | WR_SEND_MASK,
-			[IB_QPT_UC] = WR_INLINE_MASK | WR_SEND_MASK,
-			[IB_QPT_UD] = WR_INLINE_MASK | WR_SEND_MASK,
+			[IB_QPT_RC]      = WR_INLINE_MASK | WR_SEND_MASK,
+			[IB_QPT_UC]      = WR_INLINE_MASK | WR_SEND_MASK,
+			[IB_QPT_UD]      = WR_INLINE_MASK | WR_SEND_MASK,
+			[IB_QPT_XRC_INI] = WR_INLINE_MASK | WR_SEND_MASK,
 		},
 	},
 	[IB_WR_RDMA_READ_WITH_INV] = {
 		.name = "IB_WR_RDMA_READ_WITH_INV",
 		.mask = {
-			[IB_QPT_RC] = WR_READ_MASK,
+			[IB_QPT_RC]      = WR_READ_MASK,
+			[IB_QPT_XRC_INI] = WR_READ_MASK,
 		},
 	},
 	[IB_WR_LOCAL_INV] = {
 		.name = "IB_WR_LOCAL_INV",
 		.mask = {
-			[IB_QPT_RC] = WR_LOCAL_OP_MASK,
+			[IB_QPT_RC]      = WR_LOCAL_OP_MASK,
+			[IB_QPT_XRC_INI] = WR_LOCAL_OP_MASK,
 		},
 	},
 	[IB_WR_REG_MR] = {
 		.name = "IB_WR_REG_MR",
 		.mask = {
-			[IB_QPT_RC] = WR_LOCAL_OP_MASK,
+			[IB_QPT_RC]      = WR_LOCAL_OP_MASK,
+			[IB_QPT_XRC_INI] = WR_LOCAL_OP_MASK,
 		},
 	},
 	[IB_WR_BIND_MW] = {
 		.name = "IB_WR_BIND_MW",
 		.mask = {
-			[IB_QPT_RC] = WR_LOCAL_OP_MASK,
-			[IB_QPT_UC] = WR_LOCAL_OP_MASK,
+			[IB_QPT_RC]      = WR_LOCAL_OP_MASK,
+			[IB_QPT_UC]      = WR_LOCAL_OP_MASK,
+			[IB_QPT_XRC_INI] = WR_LOCAL_OP_MASK,
 		},
 	},
 };
@@ -918,6 +930,327 @@ struct rxe_opcode_info rxe_opcode[RXE_NUM_OPCODE] = {
 		}
 	},

+	/* XRC */
+	[IB_OPCODE_XRC_SEND_FIRST] = {
+		.name = "IB_OPCODE_XRC_SEND_FIRST",
+		.mask = RXE_XRCETH_MASK | RXE_PAYLOAD_MASK | RXE_REQ_MASK |
+				RXE_RWR_MASK | RXE_SEND_MASK | RXE_FIRST_MASK,
+		.length = RXE_BTH_BYTES + RXE_XRCETH_BYTES,
+		.offset = {
+			[RXE_BTH] = 0,
+			[RXE_XRCETH] = RXE_BTH_BYTES,
+			[RXE_PAYLOAD] = RXE_BTH_BYTES +
+					RXE_XRCETH_BYTES,
+		}
+	},
+	[IB_OPCODE_XRC_SEND_MIDDLE] = {
+		.name = "IB_OPCODE_XRC_SEND_MIDDLE",
+		.mask = RXE_XRCETH_MASK | RXE_PAYLOAD_MASK | RXE_REQ_MASK |
+				RXE_SEND_MASK | RXE_MIDDLE_MASK,
+		.length = RXE_BTH_BYTES + RXE_XRCETH_BYTES,
+		.offset = {
+			[RXE_BTH] = 0,
+			[RXE_XRCETH] = RXE_BTH_BYTES,
+			[RXE_PAYLOAD] = RXE_BTH_BYTES +
+					RXE_XRCETH_BYTES,
+		}
+	},
+	[IB_OPCODE_XRC_SEND_LAST] = {
+		.name = "IB_OPCODE_XRC_SEND_LAST",
+		.mask = RXE_XRCETH_MASK | RXE_PAYLOAD_MASK | RXE_REQ_MASK |
+				RXE_COMP_MASK | RXE_SEND_MASK | RXE_LAST_MASK,
+		.length = RXE_BTH_BYTES + RXE_XRCETH_BYTES,
+		.offset = {
+			[RXE_BTH] = 0,
+			[RXE_XRCETH] = RXE_BTH_BYTES,
+			[RXE_PAYLOAD] = RXE_BTH_BYTES +
+					RXE_XRCETH_BYTES,
+		}
+	},
+	[IB_OPCODE_XRC_SEND_LAST_WITH_IMMEDIATE] = {
+		.name = "IB_OPCODE_XRC_SEND_LAST_WITH_IMMEDIATE",
+		.mask = RXE_XRCETH_MASK | RXE_IMMDT_MASK | RXE_PAYLOAD_MASK |
+				RXE_REQ_MASK | RXE_COMP_MASK | RXE_SEND_MASK |
+				RXE_LAST_MASK,
+		.length = RXE_BTH_BYTES + RXE_XRCETH_BYTES + RXE_IMMDT_BYTES,
+		.offset = {
+			[RXE_BTH] = 0,
+			[RXE_XRCETH] = RXE_BTH_BYTES,
+			[RXE_IMMDT] = RXE_BTH_BYTES +
+					RXE_XRCETH_BYTES,
[RXE_PAYLOAD] = RXE_BTH_BYTES + + RXE_XRCETH_BYTES + + RXE_IMMDT_BYTES, + } + }, + [IB_OPCODE_XRC_SEND_ONLY] = { + .name = "IB_OPCODE_XRC_SEND_ONLY", + .mask = RXE_XRCETH_MASK | RXE_PAYLOAD_MASK | RXE_REQ_MASK | + RXE_COMP_MASK | RXE_RWR_MASK | RXE_SEND_MASK | + RXE_ONLY_MASK, + .length = RXE_BTH_BYTES + RXE_XRCETH_BYTES, + .offset = { + [RXE_BTH] = 0, + [RXE_XRCETH] = RXE_BTH_BYTES, + [RXE_PAYLOAD] = RXE_BTH_BYTES + + RXE_XRCETH_BYTES, + } + }, + [IB_OPCODE_XRC_SEND_ONLY_WITH_IMMEDIATE] = { + .name = "IB_OPCODE_XRC_SEND_ONLY_WITH_IMMEDIATE", + .mask = RXE_XRCETH_MASK | RXE_IMMDT_MASK | RXE_PAYLOAD_MASK | + RXE_REQ_MASK | RXE_COMP_MASK | RXE_RWR_MASK | + RXE_SEND_MASK | RXE_ONLY_MASK, + .length = RXE_BTH_BYTES + RXE_XRCETH_BYTES + RXE_IMMDT_BYTES, + .offset = { + [RXE_BTH] = 0, + [RXE_XRCETH] = RXE_BTH_BYTES, + [RXE_IMMDT] = RXE_BTH_BYTES + + RXE_XRCETH_BYTES, + [RXE_PAYLOAD] = RXE_BTH_BYTES + + RXE_XRCETH_BYTES + + RXE_IMMDT_BYTES, + } + }, + [IB_OPCODE_XRC_RDMA_WRITE_FIRST] = { + .name = "IB_OPCODE_XRC_RDMA_WRITE_FIRST", + .mask = RXE_XRCETH_MASK | RXE_RETH_MASK | RXE_PAYLOAD_MASK | + RXE_REQ_MASK | RXE_WRITE_MASK | RXE_FIRST_MASK, + .length = RXE_BTH_BYTES + RXE_XRCETH_BYTES + RXE_RETH_BYTES, + .offset = { + [RXE_BTH] = 0, + [RXE_XRCETH] = RXE_BTH_BYTES, + [RXE_RETH] = RXE_BTH_BYTES + + RXE_XRCETH_BYTES, + [RXE_PAYLOAD] = RXE_BTH_BYTES + + RXE_XRCETH_BYTES + + RXE_RETH_BYTES, + } + }, + [IB_OPCODE_XRC_RDMA_WRITE_MIDDLE] = { + .name = "IB_OPCODE_XRC_RDMA_WRITE_MIDDLE", + .mask = RXE_XRCETH_MASK | RXE_PAYLOAD_MASK | RXE_REQ_MASK | + RXE_WRITE_MASK | RXE_MIDDLE_MASK, + .length = RXE_BTH_BYTES + RXE_XRCETH_BYTES, + .offset = { + [RXE_BTH] = 0, + [RXE_XRCETH] = RXE_BTH_BYTES, + [RXE_PAYLOAD] = RXE_BTH_BYTES + + RXE_XRCETH_BYTES, + } + }, + [IB_OPCODE_XRC_RDMA_WRITE_LAST] = { + .name = "IB_OPCODE_XRC_RDMA_WRITE_LAST", + .mask = RXE_XRCETH_MASK | RXE_PAYLOAD_MASK | RXE_REQ_MASK | + RXE_WRITE_MASK | RXE_LAST_MASK, + .length = RXE_BTH_BYTES + RXE_XRCETH_BYTES, + .offset = { + [RXE_BTH] = 0, + [RXE_XRCETH] = RXE_BTH_BYTES, + [RXE_PAYLOAD] = RXE_BTH_BYTES + + RXE_XRCETH_BYTES, + } + }, + [IB_OPCODE_XRC_RDMA_WRITE_LAST_WITH_IMMEDIATE] = { + .name = "IB_OPCODE_XRC_RDMA_WRITE_LAST_WITH_IMMEDIATE", + .mask = RXE_XRCETH_MASK | RXE_IMMDT_MASK | RXE_PAYLOAD_MASK | + RXE_REQ_MASK | RXE_WRITE_MASK | RXE_COMP_MASK | + RXE_RWR_MASK | RXE_LAST_MASK, + .length = RXE_BTH_BYTES + RXE_XRCETH_BYTES + RXE_IMMDT_BYTES, + .offset = { + [RXE_BTH] = 0, + [RXE_XRCETH] = RXE_BTH_BYTES, + [RXE_IMMDT] = RXE_BTH_BYTES + + RXE_XRCETH_BYTES, + [RXE_PAYLOAD] = RXE_BTH_BYTES + + RXE_XRCETH_BYTES + + RXE_IMMDT_BYTES, + } + }, + [IB_OPCODE_XRC_RDMA_WRITE_ONLY] = { + .name = "IB_OPCODE_XRC_RDMA_WRITE_ONLY", + .mask = RXE_XRCETH_MASK | RXE_RETH_MASK | RXE_PAYLOAD_MASK | + RXE_REQ_MASK | RXE_WRITE_MASK | RXE_ONLY_MASK, + .length = RXE_BTH_BYTES + RXE_XRCETH_BYTES + RXE_RETH_BYTES, + .offset = { + [RXE_BTH] = 0, + [RXE_XRCETH] = RXE_BTH_BYTES, + [RXE_RETH] = RXE_BTH_BYTES + + RXE_XRCETH_BYTES, + [RXE_PAYLOAD] = RXE_BTH_BYTES + + RXE_XRCETH_BYTES + + RXE_RETH_BYTES, + } + }, + [IB_OPCODE_XRC_RDMA_WRITE_ONLY_WITH_IMMEDIATE] = { + .name = "IB_OPCODE_XRC_RDMA_WRITE_ONLY_WITH_IMMEDIATE", + .mask = RXE_XRCETH_MASK | RXE_RETH_MASK | RXE_IMMDT_MASK | + RXE_PAYLOAD_MASK | RXE_REQ_MASK | RXE_WRITE_MASK | + RXE_COMP_MASK | RXE_RWR_MASK | RXE_ONLY_MASK, + .length = RXE_BTH_BYTES + RXE_XRCETH_BYTES + RXE_IMMDT_BYTES + + RXE_RETH_BYTES, + .offset = { + [RXE_BTH] = 0, + [RXE_XRCETH] = RXE_BTH_BYTES, + [RXE_RETH] = RXE_BTH_BYTES + + 
RXE_XRCETH_BYTES, + [RXE_IMMDT] = RXE_BTH_BYTES + + RXE_XRCETH_BYTES + + RXE_RETH_BYTES, + [RXE_PAYLOAD] = RXE_BTH_BYTES + + RXE_XRCETH_BYTES + + RXE_RETH_BYTES + + RXE_IMMDT_BYTES, + } + }, + [IB_OPCODE_XRC_RDMA_READ_REQUEST] = { + .name = "IB_OPCODE_XRC_RDMA_READ_REQUEST", + .mask = RXE_XRCETH_MASK | RXE_RETH_MASK | RXE_REQ_MASK | + RXE_READ_MASK | RXE_ONLY_MASK, + .length = RXE_BTH_BYTES + RXE_XRCETH_BYTES + RXE_RETH_BYTES, + .offset = { + [RXE_BTH] = 0, + [RXE_XRCETH] = RXE_BTH_BYTES, + [RXE_RETH] = RXE_BTH_BYTES + + RXE_XRCETH_BYTES, + [RXE_PAYLOAD] = RXE_BTH_BYTES + + RXE_XRCETH_BYTES + + RXE_RETH_BYTES, + } + }, + [IB_OPCODE_XRC_RDMA_READ_RESPONSE_FIRST] = { + .name = "IB_OPCODE_XRC_RDMA_READ_RESPONSE_FIRST", + .mask = RXE_AETH_MASK | RXE_PAYLOAD_MASK | RXE_ACK_MASK | + RXE_FIRST_MASK, + .length = RXE_BTH_BYTES + RXE_AETH_BYTES, + .offset = { + [RXE_BTH] = 0, + [RXE_AETH] = RXE_BTH_BYTES, + [RXE_PAYLOAD] = RXE_BTH_BYTES + + RXE_AETH_BYTES, + } + }, + [IB_OPCODE_XRC_RDMA_READ_RESPONSE_MIDDLE] = { + .name = "IB_OPCODE_XRC_RDMA_READ_RESPONSE_MIDDLE", + .mask = RXE_PAYLOAD_MASK | RXE_ACK_MASK | RXE_MIDDLE_MASK, + .length = RXE_BTH_BYTES, + .offset = { + [RXE_BTH] = 0, + [RXE_PAYLOAD] = RXE_BTH_BYTES, + } + }, + [IB_OPCODE_XRC_RDMA_READ_RESPONSE_LAST] = { + .name = "IB_OPCODE_XRC_RDMA_READ_RESPONSE_LAST", + .mask = RXE_AETH_MASK | RXE_PAYLOAD_MASK | RXE_ACK_MASK | + RXE_LAST_MASK, + .length = RXE_BTH_BYTES + RXE_AETH_BYTES, + .offset = { + [RXE_BTH] = 0, + [RXE_AETH] = RXE_BTH_BYTES, + [RXE_PAYLOAD] = RXE_BTH_BYTES + + RXE_AETH_BYTES, + } + }, + [IB_OPCODE_XRC_RDMA_READ_RESPONSE_ONLY] = { + .name = "IB_OPCODE_XRC_RDMA_READ_RESPONSE_ONLY", + .mask = RXE_AETH_MASK | RXE_PAYLOAD_MASK | RXE_ACK_MASK | + RXE_ONLY_MASK, + .length = RXE_BTH_BYTES + RXE_AETH_BYTES, + .offset = { + [RXE_BTH] = 0, + [RXE_AETH] = RXE_BTH_BYTES, + [RXE_PAYLOAD] = RXE_BTH_BYTES + + RXE_AETH_BYTES, + } + }, + [IB_OPCODE_XRC_ACKNOWLEDGE] = { + .name = "IB_OPCODE_XRC_ACKNOWLEDGE", + .mask = RXE_AETH_MASK | RXE_ACK_MASK | RXE_ONLY_MASK, + .length = RXE_BTH_BYTES + RXE_AETH_BYTES, + .offset = { + [RXE_BTH] = 0, + [RXE_AETH] = RXE_BTH_BYTES, + [RXE_PAYLOAD] = RXE_BTH_BYTES + + RXE_AETH_BYTES, + } + }, + [IB_OPCODE_XRC_ATOMIC_ACKNOWLEDGE] = { + .name = "IB_OPCODE_XRC_ATOMIC_ACKNOWLEDGE", + .mask = RXE_AETH_MASK | RXE_ATMACK_MASK | RXE_ACK_MASK | + RXE_ONLY_MASK, + .length = RXE_BTH_BYTES + RXE_ATMACK_BYTES + RXE_AETH_BYTES, + .offset = { + [RXE_BTH] = 0, + [RXE_AETH] = RXE_BTH_BYTES, + [RXE_ATMACK] = RXE_BTH_BYTES + + RXE_AETH_BYTES, + [RXE_PAYLOAD] = RXE_BTH_BYTES + + RXE_AETH_BYTES + + RXE_ATMACK_BYTES, + } + }, + [IB_OPCODE_XRC_COMPARE_SWAP] = { + .name = "IB_OPCODE_XRC_COMPARE_SWAP", + .mask = RXE_XRCETH_MASK | RXE_ATMETH_MASK | RXE_REQ_MASK | + RXE_ATOMIC_MASK | RXE_ONLY_MASK, + .length = RXE_BTH_BYTES + RXE_XRCETH_BYTES + RXE_ATMETH_BYTES, + .offset = { + [RXE_BTH] = 0, + [RXE_XRCETH] = RXE_BTH_BYTES, + [RXE_ATMETH] = RXE_BTH_BYTES + + RXE_XRCETH_BYTES, + [RXE_PAYLOAD] = RXE_BTH_BYTES + + RXE_XRCETH_BYTES + + RXE_ATMETH_BYTES, + } + }, + [IB_OPCODE_XRC_FETCH_ADD] = { + .name = "IB_OPCODE_XRC_FETCH_ADD", + .mask = RXE_XRCETH_MASK | RXE_ATMETH_MASK | RXE_REQ_MASK | + RXE_ATOMIC_MASK | RXE_ONLY_MASK, + .length = RXE_BTH_BYTES + RXE_XRCETH_BYTES + RXE_ATMETH_BYTES, + .offset = { + [RXE_BTH] = 0, + [RXE_XRCETH] = RXE_BTH_BYTES, + [RXE_ATMETH] = RXE_BTH_BYTES + + RXE_XRCETH_BYTES, + [RXE_PAYLOAD] = RXE_BTH_BYTES + + RXE_XRCETH_BYTES + + RXE_ATMETH_BYTES, + } + }, + [IB_OPCODE_XRC_SEND_LAST_WITH_INVALIDATE] = { + .name = 
"IB_OPCODE_XRC_SEND_LAST_WITH_INVALIDATE", + .mask = RXE_XRCETH_MASK | RXE_IETH_MASK | RXE_PAYLOAD_MASK | + RXE_REQ_MASK | RXE_COMP_MASK | RXE_SEND_MASK | + RXE_LAST_MASK, + .length = RXE_BTH_BYTES + RXE_XRCETH_BYTES + RXE_IETH_BYTES, + .offset = { + [RXE_BTH] = 0, + [RXE_XRCETH] = RXE_BTH_BYTES, + [RXE_IETH] = RXE_BTH_BYTES + + RXE_XRCETH_BYTES, + [RXE_PAYLOAD] = RXE_BTH_BYTES + + RXE_XRCETH_BYTES + + RXE_IETH_BYTES, + } + }, + [IB_OPCODE_XRC_SEND_ONLY_WITH_INVALIDATE] = { + .name = "IB_OPCODE_XRC_SEND_ONLY_INV", + .mask = RXE_XRCETH_MASK | RXE_IETH_MASK | RXE_PAYLOAD_MASK | + RXE_REQ_MASK | RXE_COMP_MASK | RXE_RWR_MASK | + RXE_SEND_MASK | RXE_ONLY_MASK, + .length = RXE_BTH_BYTES + RXE_XRCETH_BYTES + RXE_IETH_BYTES, + .offset = { + [RXE_BTH] = 0, + [RXE_XRCETH] = RXE_BTH_BYTES, + [RXE_IETH] = RXE_BTH_BYTES + + RXE_XRCETH_BYTES, + [RXE_PAYLOAD] = RXE_BTH_BYTES + + RXE_XRCETH_BYTES + + RXE_IETH_BYTES, + } + }, }; static int next_opcode_rc(struct rxe_qp *qp, u32 opcode, int fits) diff --git a/drivers/infiniband/sw/rxe/rxe_opcode.h b/drivers/infiniband/sw/rxe/rxe_opcode.h index d2b6a8232e92..5528a47f0266 100644 --- a/drivers/infiniband/sw/rxe/rxe_opcode.h +++ b/drivers/infiniband/sw/rxe/rxe_opcode.h @@ -30,7 +30,7 @@ enum rxe_wr_mask { struct rxe_wr_opcode_info { char *name; - enum rxe_wr_mask mask[WR_MAX_QPT]; + enum rxe_wr_mask mask[IB_QPT_MAX]; }; extern struct rxe_wr_opcode_info rxe_wr_opcode_info[]; @@ -44,6 +44,7 @@ enum rxe_hdr_type { RXE_ATMETH, RXE_ATMACK, RXE_IETH, + RXE_XRCETH, RXE_RDETH, RXE_DETH, RXE_IMMDT, @@ -61,6 +62,7 @@ enum rxe_hdr_mask { RXE_ATMETH_MASK = BIT(RXE_ATMETH), RXE_ATMACK_MASK = BIT(RXE_ATMACK), RXE_IETH_MASK = BIT(RXE_IETH), + RXE_XRCETH_MASK = BIT(RXE_XRCETH), RXE_RDETH_MASK = BIT(RXE_RDETH), RXE_DETH_MASK = BIT(RXE_DETH), RXE_PAYLOAD_MASK = BIT(RXE_PAYLOAD), From patchwork Sat Sep 17 03:10:56 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Bob Pearson X-Patchwork-Id: 12979005 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id C1A9CC6FA8B for ; Sat, 17 Sep 2022 03:11:57 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229484AbiIQDL4 (ORCPT ); Fri, 16 Sep 2022 23:11:56 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:54852 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229813AbiIQDLT (ORCPT ); Fri, 16 Sep 2022 23:11:19 -0400 Received: from mail-ot1-x332.google.com (mail-ot1-x332.google.com [IPv6:2607:f8b0:4864:20::332]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id B1FC18C03C for ; Fri, 16 Sep 2022 20:11:12 -0700 (PDT) Received: by mail-ot1-x332.google.com with SMTP id x23-20020a056830409700b00655c6dace73so13827621ott.11 for ; Fri, 16 Sep 2022 20:11:12 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date; bh=ozTLBWu+OgVpUPFoK0XaDtZgeXfSbSLcsDTataCzAd4=; b=aUvHwte3LuQf8lkJhhaoqSwrmPjDN7La6QqnDNdVJfOY85GCbdYvx9sD5FHRG3JkQV 2NemwePx+r4Mjj+WIHuW6nbATa/jOUdf2EItzpO7OzyJQAOzHL7CNTZOo+uDShLpj/Ip P46AQnjFDJTExUBS1e/uVDzfarw8/O8IAM9QuZ0U9u8jdsrEPET6hdY1KJKOK0D1WKhq tK3k+i2HmFFmaZFc96zvtQm1pqsuaPd/BO+m9lpfs38Jk4nImiqAm/pfnn4Q63mFCbMK 
From: Bob Pearson
To: jgg@nvidia.com, zyjzyj2000@gmail.com, lizhijian@fujitsu.com, linux-rdma@vger.kernel.org
Cc: Bob Pearson
Subject: [PATCH for-next 05/13] RDMA/rxe: Add xrc opcodes to next_opcode()
Date: Fri, 16 Sep 2022 22:10:56 -0500
Message-Id: <20220917031104.21222-6-rpearsonhpe@gmail.com>
In-Reply-To: <20220917031104.21222-1-rpearsonhpe@gmail.com>
References: <20220917031104.21222-1-rpearsonhpe@gmail.com>
Extend next_opcode() to support xrc operations.
Signed-off-by: Bob Pearson --- drivers/infiniband/sw/rxe/rxe_opcode.c | 88 ++++++++++++++++++++++++++ 1 file changed, 88 insertions(+)
diff --git a/drivers/infiniband/sw/rxe/rxe_opcode.c b/drivers/infiniband/sw/rxe/rxe_opcode.c index 4ae926a37ef8..c2bac0ce444a 100644 --- a/drivers/infiniband/sw/rxe/rxe_opcode.c +++ b/drivers/infiniband/sw/rxe/rxe_opcode.c @@ -1376,6 +1376,91 @@ static int next_opcode_uc(struct rxe_qp *qp, u32 opcode, int fits) return -EINVAL; } +static int next_opcode_xrc(struct rxe_qp *qp, u32 wr_opcode, int fits) +{ + switch (wr_opcode) { + case IB_WR_RDMA_WRITE: + if (qp->req.opcode == IB_OPCODE_XRC_RDMA_WRITE_FIRST || + qp->req.opcode == IB_OPCODE_XRC_RDMA_WRITE_MIDDLE) + return fits ? + IB_OPCODE_XRC_RDMA_WRITE_LAST : + IB_OPCODE_XRC_RDMA_WRITE_MIDDLE; + else + return fits ? + IB_OPCODE_XRC_RDMA_WRITE_ONLY : + IB_OPCODE_XRC_RDMA_WRITE_FIRST; + + case IB_WR_RDMA_WRITE_WITH_IMM: + if (qp->req.opcode == IB_OPCODE_XRC_RDMA_WRITE_FIRST || + qp->req.opcode == IB_OPCODE_XRC_RDMA_WRITE_MIDDLE) + return fits ? + IB_OPCODE_XRC_RDMA_WRITE_LAST_WITH_IMMEDIATE : + IB_OPCODE_XRC_RDMA_WRITE_MIDDLE; + else + return fits ? + IB_OPCODE_XRC_RDMA_WRITE_ONLY_WITH_IMMEDIATE : + IB_OPCODE_XRC_RDMA_WRITE_FIRST; + + case IB_WR_SEND: + if (qp->req.opcode == IB_OPCODE_XRC_SEND_FIRST || + qp->req.opcode == IB_OPCODE_XRC_SEND_MIDDLE) + return fits ? + IB_OPCODE_XRC_SEND_LAST : + IB_OPCODE_XRC_SEND_MIDDLE; + else + return fits ?
+ IB_OPCODE_XRC_SEND_ONLY : + IB_OPCODE_XRC_SEND_FIRST; + + case IB_WR_SEND_WITH_IMM: + if (qp->req.opcode == IB_OPCODE_XRC_SEND_FIRST || + qp->req.opcode == IB_OPCODE_XRC_SEND_MIDDLE) + return fits ? + IB_OPCODE_XRC_SEND_LAST_WITH_IMMEDIATE : + IB_OPCODE_XRC_SEND_MIDDLE; + else + return fits ? + IB_OPCODE_XRC_SEND_ONLY_WITH_IMMEDIATE : + IB_OPCODE_XRC_SEND_FIRST; + + case IB_WR_RDMA_READ: + return IB_OPCODE_XRC_RDMA_READ_REQUEST; + + case IB_WR_RDMA_READ_WITH_INV: + return IB_OPCODE_XRC_RDMA_READ_REQUEST; + + case IB_WR_ATOMIC_CMP_AND_SWP: + return IB_OPCODE_XRC_COMPARE_SWAP; + + case IB_WR_MASKED_ATOMIC_CMP_AND_SWP: + return -EOPNOTSUPP; + + case IB_WR_ATOMIC_FETCH_AND_ADD: + return IB_OPCODE_XRC_FETCH_ADD; + + case IB_WR_MASKED_ATOMIC_FETCH_AND_ADD: + return -EOPNOTSUPP; + + case IB_WR_SEND_WITH_INV: + if (qp->req.opcode == IB_OPCODE_XRC_SEND_FIRST || + qp->req.opcode == IB_OPCODE_XRC_SEND_MIDDLE) + return fits ? + IB_OPCODE_XRC_SEND_LAST_WITH_INVALIDATE : + IB_OPCODE_XRC_SEND_MIDDLE; + else + return fits ? + IB_OPCODE_XRC_SEND_ONLY_WITH_INVALIDATE : + IB_OPCODE_XRC_SEND_FIRST; + + case IB_WR_LOCAL_INV: + case IB_WR_REG_MR: + case IB_WR_BIND_MW: + return wr_opcode; + } + + return -EINVAL; +} + int next_opcode(struct rxe_qp *qp, struct rxe_send_wqe *wqe, u32 opcode) { int fits = (wqe->dma.resid <= qp->mtu); @@ -1387,6 +1472,9 @@ int next_opcode(struct rxe_qp *qp, struct rxe_send_wqe *wqe, u32 opcode) case IB_QPT_UC: return next_opcode_uc(qp, opcode, fits); + case IB_QPT_XRC_INI: + return next_opcode_xrc(qp, opcode, fits); + case IB_QPT_UD: case IB_QPT_GSI: switch (opcode) {
From patchwork Sat Sep 17 03:10:57 2022
From: Bob Pearson
To: jgg@nvidia.com, zyjzyj2000@gmail.com, lizhijian@fujitsu.com, linux-rdma@vger.kernel.org
Cc: Bob Pearson
Subject: [PATCH for-next 06/13] RDMA/rxe: Implement open_xrcd and close_xrcd
Date: Fri, 16 Sep 2022 22:10:57 -0500
Message-Id: <20220917031104.21222-7-rpearsonhpe@gmail.com>
In-Reply-To: <20220917031104.21222-1-rpearsonhpe@gmail.com>
References: <20220917031104.21222-1-rpearsonhpe@gmail.com>
Add rxe_alloc_xrcd() and rxe_dealloc_xrcd() and add xrcd objects to the rxe object pools to implement the ib_alloc_xrcd() and ib_dealloc_xrcd() verbs.
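For reference, these hooks are reached from user space through libibverbs. A minimal sketch (illustrative only, not part of this patch; the helper name and error handling are made up here):

	#include <fcntl.h>
	#include <infiniband/verbs.h>

	/* Open an anonymous xrcd (fd == -1 means it is not shared
	 * through an inode) and close it again. ibv_open_xrcd() and
	 * ibv_close_xrcd() end up in the driver's .alloc_xrcd and
	 * .dealloc_xrcd hooks added below.
	 */
	static int xrcd_example(struct ibv_context *ctx)
	{
		struct ibv_xrcd_init_attr attr = {
			.comp_mask = IBV_XRCD_INIT_ATTR_FD |
				     IBV_XRCD_INIT_ATTR_OFLAGS,
			.fd = -1,
			.oflags = O_CREAT,
		};
		struct ibv_xrcd *xrcd = ibv_open_xrcd(ctx, &attr);

		if (!xrcd)
			return -1;

		return ibv_close_xrcd(xrcd);
	}

Passing fd = -1 with O_CREAT asks for an xrcd that is private to the process; an application that wants to share the domain across processes would pass the fd of an agreed-on file instead.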
Signed-off-by: Bob Pearson --- drivers/infiniband/sw/rxe/rxe.c | 2 ++ drivers/infiniband/sw/rxe/rxe_param.h | 3 +++ drivers/infiniband/sw/rxe/rxe_pool.c | 8 ++++++++ drivers/infiniband/sw/rxe/rxe_pool.h | 1 + drivers/infiniband/sw/rxe/rxe_verbs.c | 23 +++++++++++++++++++++++ drivers/infiniband/sw/rxe/rxe_verbs.h | 11 +++++++++++ 6 files changed, 48 insertions(+) diff --git a/drivers/infiniband/sw/rxe/rxe.c b/drivers/infiniband/sw/rxe/rxe.c index 51daac5c4feb..acd22980836e 100644 --- a/drivers/infiniband/sw/rxe/rxe.c +++ b/drivers/infiniband/sw/rxe/rxe.c @@ -23,6 +23,7 @@ void rxe_dealloc(struct ib_device *ib_dev) rxe_pool_cleanup(&rxe->uc_pool); rxe_pool_cleanup(&rxe->pd_pool); rxe_pool_cleanup(&rxe->ah_pool); + rxe_pool_cleanup(&rxe->xrcd_pool); rxe_pool_cleanup(&rxe->srq_pool); rxe_pool_cleanup(&rxe->qp_pool); rxe_pool_cleanup(&rxe->cq_pool); @@ -120,6 +121,7 @@ static void rxe_init_pools(struct rxe_dev *rxe) rxe_pool_init(rxe, &rxe->uc_pool, RXE_TYPE_UC); rxe_pool_init(rxe, &rxe->pd_pool, RXE_TYPE_PD); rxe_pool_init(rxe, &rxe->ah_pool, RXE_TYPE_AH); + rxe_pool_init(rxe, &rxe->xrcd_pool, RXE_TYPE_XRCD); rxe_pool_init(rxe, &rxe->srq_pool, RXE_TYPE_SRQ); rxe_pool_init(rxe, &rxe->qp_pool, RXE_TYPE_QP); rxe_pool_init(rxe, &rxe->cq_pool, RXE_TYPE_CQ); diff --git a/drivers/infiniband/sw/rxe/rxe_param.h b/drivers/infiniband/sw/rxe/rxe_param.h index 86c7a8bf3cbb..fa4bf177e123 100644 --- a/drivers/infiniband/sw/rxe/rxe_param.h +++ b/drivers/infiniband/sw/rxe/rxe_param.h @@ -86,6 +86,9 @@ enum rxe_device_param { RXE_MAX_QP_INDEX = DEFAULT_MAX_VALUE, RXE_MAX_QP = DEFAULT_MAX_VALUE - RXE_MIN_QP_INDEX, + RXE_MIN_XRCD_INDEX = 1, + RXE_MAX_XRCD_INDEX = 128, + RXE_MAX_XRCD = 128, RXE_MIN_SRQ_INDEX = 0x00020001, RXE_MAX_SRQ_INDEX = DEFAULT_MAX_VALUE, RXE_MAX_SRQ = DEFAULT_MAX_VALUE - RXE_MIN_SRQ_INDEX, diff --git a/drivers/infiniband/sw/rxe/rxe_pool.c b/drivers/infiniband/sw/rxe/rxe_pool.c index f50620f5a0a1..b54453b68169 100644 --- a/drivers/infiniband/sw/rxe/rxe_pool.c +++ b/drivers/infiniband/sw/rxe/rxe_pool.c @@ -42,6 +42,14 @@ static const struct rxe_type_info { .max_index = RXE_MAX_AH_INDEX, .max_elem = RXE_MAX_AH_INDEX - RXE_MIN_AH_INDEX + 1, }, + [RXE_TYPE_XRCD] = { + .name = "xrcd", + .size = sizeof(struct rxe_xrcd), + .elem_offset = offsetof(struct rxe_xrcd, elem), + .min_index = RXE_MIN_XRCD_INDEX, + .max_index = RXE_MAX_XRCD_INDEX, + .max_elem = RXE_MAX_XRCD_INDEX - RXE_MIN_XRCD_INDEX + 1, + }, [RXE_TYPE_SRQ] = { .name = "srq", .size = sizeof(struct rxe_srq), diff --git a/drivers/infiniband/sw/rxe/rxe_pool.h b/drivers/infiniband/sw/rxe/rxe_pool.h index 9d83cb32092f..35ac0746a4b8 100644 --- a/drivers/infiniband/sw/rxe/rxe_pool.h +++ b/drivers/infiniband/sw/rxe/rxe_pool.h @@ -11,6 +11,7 @@ enum rxe_elem_type { RXE_TYPE_UC, RXE_TYPE_PD, RXE_TYPE_AH, + RXE_TYPE_XRCD, RXE_TYPE_SRQ, RXE_TYPE_QP, RXE_TYPE_CQ, diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.c b/drivers/infiniband/sw/rxe/rxe_verbs.c index 9ebe9decad34..4a5da079bf11 100644 --- a/drivers/infiniband/sw/rxe/rxe_verbs.c +++ b/drivers/infiniband/sw/rxe/rxe_verbs.c @@ -281,6 +281,26 @@ static int post_one_recv(struct rxe_rq *rq, const struct ib_recv_wr *ibwr) return err; } +static int rxe_alloc_xrcd(struct ib_xrcd *ibxrcd, struct ib_udata *udata) +{ + struct rxe_dev *rxe = to_rdev(ibxrcd->device); + struct rxe_xrcd *xrcd = to_rxrcd(ibxrcd); + int err; + + err = rxe_add_to_pool(&rxe->xrcd_pool, xrcd); + + return err; +} + +static int rxe_dealloc_xrcd(struct ib_xrcd *ibxrcd, struct ib_udata *udata) +{ + struct rxe_xrcd *xrcd = 
to_rxrcd(ibxrcd); + + rxe_cleanup(xrcd); + + return 0; +} + static int rxe_create_srq(struct ib_srq *ibsrq, struct ib_srq_init_attr *init, struct ib_udata *udata) { @@ -1055,6 +1075,7 @@ static const struct ib_device_ops rxe_dev_ops = { .alloc_mw = rxe_alloc_mw, .alloc_pd = rxe_alloc_pd, .alloc_ucontext = rxe_alloc_ucontext, + .alloc_xrcd = rxe_alloc_xrcd, .attach_mcast = rxe_attach_mcast, .create_ah = rxe_create_ah, .create_cq = rxe_create_cq, @@ -1065,6 +1086,7 @@ static const struct ib_device_ops rxe_dev_ops = { .dealloc_mw = rxe_dealloc_mw, .dealloc_pd = rxe_dealloc_pd, .dealloc_ucontext = rxe_dealloc_ucontext, + .dealloc_xrcd = rxe_dealloc_xrcd, .dereg_mr = rxe_dereg_mr, .destroy_ah = rxe_destroy_ah, .destroy_cq = rxe_destroy_cq, @@ -1103,6 +1125,7 @@ static const struct ib_device_ops rxe_dev_ops = { INIT_RDMA_OBJ_SIZE(ib_cq, rxe_cq, ibcq), INIT_RDMA_OBJ_SIZE(ib_pd, rxe_pd, ibpd), INIT_RDMA_OBJ_SIZE(ib_qp, rxe_qp, ibqp), + INIT_RDMA_OBJ_SIZE(ib_xrcd, rxe_xrcd, ibxrcd), INIT_RDMA_OBJ_SIZE(ib_srq, rxe_srq, ibsrq), INIT_RDMA_OBJ_SIZE(ib_ucontext, rxe_ucontext, ibuc), INIT_RDMA_OBJ_SIZE(ib_mw, rxe_mw, ibmw), diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.h b/drivers/infiniband/sw/rxe/rxe_verbs.h index a51819d0c345..6c4cfb802dd4 100644 --- a/drivers/infiniband/sw/rxe/rxe_verbs.h +++ b/drivers/infiniband/sw/rxe/rxe_verbs.h @@ -93,6 +93,11 @@ struct rxe_rq { struct rxe_queue *queue; }; +struct rxe_xrcd { + struct ib_xrcd ibxrcd; + struct rxe_pool_elem elem; +}; + struct rxe_srq { struct ib_srq ibsrq; struct rxe_pool_elem elem; @@ -383,6 +388,7 @@ struct rxe_dev { struct rxe_pool uc_pool; struct rxe_pool pd_pool; struct rxe_pool ah_pool; + struct rxe_pool xrcd_pool; struct rxe_pool srq_pool; struct rxe_pool qp_pool; struct rxe_pool cq_pool; @@ -432,6 +438,11 @@ static inline struct rxe_ah *to_rah(struct ib_ah *ah) return ah ? container_of(ah, struct rxe_ah, ibah) : NULL; } +static inline struct rxe_xrcd *to_rxrcd(struct ib_xrcd *ibxrcd) +{ + return ibxrcd ? container_of(ibxrcd, struct rxe_xrcd, ibxrcd) : NULL; +} + static inline struct rxe_srq *to_rsrq(struct ib_srq *srq) { return srq ? 
container_of(srq, struct rxe_srq, ibsrq) : NULL;
From patchwork Sat Sep 17 03:10:58 2022
From: Bob Pearson
To: jgg@nvidia.com, zyjzyj2000@gmail.com, lizhijian@fujitsu.com, linux-rdma@vger.kernel.org
Cc: Bob Pearson
Subject: [PATCH for-next 07/13] RDMA/rxe: Extend srq verbs to support xrcd
Date: Fri, 16 Sep 2022 22:10:58 -0500
Message-Id: <20220917031104.21222-8-rpearsonhpe@gmail.com>
In-Reply-To: <20220917031104.21222-1-rpearsonhpe@gmail.com>
References: <20220917031104.21222-1-rpearsonhpe@gmail.com>
Extend the srq create verb to support xrcd.
Signed-off-by: Bob Pearson --- drivers/infiniband/sw/rxe/rxe_srq.c | 131 ++++++++++++++------------ drivers/infiniband/sw/rxe/rxe_verbs.c | 13 +-- drivers/infiniband/sw/rxe/rxe_verbs.h | 8 +- include/uapi/rdma/rdma_user_rxe.h | 4 +- 4 files changed, 83 insertions(+), 73 deletions(-)
diff --git a/drivers/infiniband/sw/rxe/rxe_srq.c b/drivers/infiniband/sw/rxe/rxe_srq.c index 02b39498c370..fcd1a58c3900 100644 --- a/drivers/infiniband/sw/rxe/rxe_srq.c +++ b/drivers/infiniband/sw/rxe/rxe_srq.c @@ -11,61 +11,85 @@ int rxe_srq_chk_init(struct rxe_dev *rxe, struct ib_srq_init_attr *init) { struct ib_srq_attr *attr = &init->attr; + int err = -EINVAL; - if (attr->max_wr > rxe->attr.max_srq_wr) { - pr_warn("max_wr(%d) > max_srq_wr(%d)\n", - attr->max_wr, rxe->attr.max_srq_wr); - goto err1; + if (init->srq_type == IB_SRQT_TM) { + err = -EOPNOTSUPP; + goto err_out; } - if (attr->max_wr <= 0) { - pr_warn("max_wr(%d) <= 0\n", attr->max_wr); - goto err1; + if (init->srq_type == IB_SRQT_XRC) { + if (!init->ext.cq || !init->ext.xrc.xrcd) + goto err_out; } + if (attr->max_wr > rxe->attr.max_srq_wr) + goto err_out; + + if (attr->max_wr <= 0) + goto err_out; + if (attr->max_wr < RXE_MIN_SRQ_WR) attr->max_wr = RXE_MIN_SRQ_WR; - if (attr->max_sge > rxe->attr.max_srq_sge) { - pr_warn("max_sge(%d) > max_srq_sge(%d)\n", - attr->max_sge, rxe->attr.max_srq_sge); - goto err1; - } + if (attr->max_sge > rxe->attr.max_srq_sge) + goto err_out; if (attr->max_sge < RXE_MIN_SRQ_SGE) attr->max_sge = RXE_MIN_SRQ_SGE; return 0; -err1: - return -EINVAL; +err_out: + pr_debug("%s: failed err = %d\n", __func__, err); + return err; } int rxe_srq_from_init(struct rxe_dev *rxe, struct rxe_srq *srq, struct ib_srq_init_attr *init, struct ib_udata *udata, struct rxe_create_srq_resp __user *uresp) { - int err; - int srq_wqe_size; + struct rxe_pd *pd = to_rpd(srq->ibsrq.pd); + struct rxe_cq *cq; + struct rxe_xrcd *xrcd; struct rxe_queue *q; - enum queue_type type; + int srq_wqe_size; + int err; + + rxe_get(pd); + srq->pd = pd; srq->ibsrq.event_handler = init->event_handler; srq->ibsrq.srq_context = init->srq_context; srq->limit = init->attr.srq_limit; - srq->srq_num = srq->elem.index; srq->rq.max_wr = init->attr.max_wr; srq->rq.max_sge = init->attr.max_sge; - srq_wqe_size = rcv_wqe_size(srq->rq.max_sge); + if (init->srq_type == IB_SRQT_XRC) { + cq = to_rcq(init->ext.cq); + if (cq) { + rxe_get(cq); + srq->cq = to_rcq(init->ext.cq); + } else { + return -EINVAL; + } + xrcd = to_rxrcd(init->ext.xrc.xrcd); + if (xrcd) { + rxe_get(xrcd); + srq->xrcd = to_rxrcd(init->ext.xrc.xrcd); + } + srq->ibsrq.ext.xrc.srq_num = srq->elem.index; + } spin_lock_init(&srq->rq.producer_lock); spin_lock_init(&srq->rq.consumer_lock); - type =
QUEUE_TYPE_FROM_CLIENT; - q = rxe_queue_init(rxe, &srq->rq.max_wr, srq_wqe_size, type); + srq_wqe_size = rcv_wqe_size(srq->rq.max_sge); + q = rxe_queue_init(rxe, &srq->rq.max_wr, srq_wqe_size, + QUEUE_TYPE_FROM_CLIENT); if (!q) { - pr_warn("unable to allocate queue for srq\n"); + pr_debug("%s: srq#%d: unable to allocate queue\n", + __func__, srq->elem.index); return -ENOMEM; } @@ -79,66 +103,45 @@ int rxe_srq_from_init(struct rxe_dev *rxe, struct rxe_srq *srq, return err; } - if (uresp) { - if (copy_to_user(&uresp->srq_num, &srq->srq_num, - sizeof(uresp->srq_num))) { - rxe_queue_cleanup(q); - return -EFAULT; - } - } - return 0; } int rxe_srq_chk_attr(struct rxe_dev *rxe, struct rxe_srq *srq, struct ib_srq_attr *attr, enum ib_srq_attr_mask mask) { - if (srq->error) { - pr_warn("srq in error state\n"); - goto err1; - } + int err = -EINVAL; + + if (srq->error) + goto err_out; if (mask & IB_SRQ_MAX_WR) { - if (attr->max_wr > rxe->attr.max_srq_wr) { - pr_warn("max_wr(%d) > max_srq_wr(%d)\n", - attr->max_wr, rxe->attr.max_srq_wr); - goto err1; - } + if (attr->max_wr > rxe->attr.max_srq_wr) + goto err_out; - if (attr->max_wr <= 0) { - pr_warn("max_wr(%d) <= 0\n", attr->max_wr); - goto err1; - } + if (attr->max_wr <= 0) + goto err_out; - if (srq->limit && (attr->max_wr < srq->limit)) { - pr_warn("max_wr (%d) < srq->limit (%d)\n", - attr->max_wr, srq->limit); - goto err1; - } + if (srq->limit && (attr->max_wr < srq->limit)) + goto err_out; if (attr->max_wr < RXE_MIN_SRQ_WR) attr->max_wr = RXE_MIN_SRQ_WR; } if (mask & IB_SRQ_LIMIT) { - if (attr->srq_limit > rxe->attr.max_srq_wr) { - pr_warn("srq_limit(%d) > max_srq_wr(%d)\n", - attr->srq_limit, rxe->attr.max_srq_wr); - goto err1; - } + if (attr->srq_limit > rxe->attr.max_srq_wr) + goto err_out; - if (attr->srq_limit > srq->rq.queue->buf->index_mask) { - pr_warn("srq_limit (%d) > cur limit(%d)\n", - attr->srq_limit, - srq->rq.queue->buf->index_mask); - goto err1; - } + if (attr->srq_limit > srq->rq.queue->buf->index_mask) + goto err_out; } return 0; -err1: - return -EINVAL; +err_out: + pr_debug("%s: srq#%d: failed err = %d\n", __func__, + srq->elem.index, err); + return err; } int rxe_srq_from_attr(struct rxe_dev *rxe, struct rxe_srq *srq, @@ -182,6 +185,12 @@ void rxe_srq_cleanup(struct rxe_pool_elem *elem) if (srq->pd) rxe_put(srq->pd); + if (srq->cq) + rxe_put(srq->cq); + + if (srq->xrcd) + rxe_put(srq->xrcd); + if (srq->rq.queue) rxe_queue_cleanup(srq->rq.queue); } diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.c b/drivers/infiniband/sw/rxe/rxe_verbs.c index 4a5da079bf11..ef86f0c5890e 100644 --- a/drivers/infiniband/sw/rxe/rxe_verbs.c +++ b/drivers/infiniband/sw/rxe/rxe_verbs.c @@ -306,7 +306,6 @@ static int rxe_create_srq(struct ib_srq *ibsrq, struct ib_srq_init_attr *init, { int err; struct rxe_dev *rxe = to_rdev(ibsrq->device); - struct rxe_pd *pd = to_rpd(ibsrq->pd); struct rxe_srq *srq = to_rsrq(ibsrq); struct rxe_create_srq_resp __user *uresp = NULL; @@ -316,9 +315,6 @@ static int rxe_create_srq(struct ib_srq *ibsrq, struct ib_srq_init_attr *init, uresp = udata->outbuf; } - if (init->srq_type != IB_SRQT_BASIC) - return -EOPNOTSUPP; - err = rxe_srq_chk_init(rxe, init); if (err) return err; @@ -327,13 +323,11 @@ static int rxe_create_srq(struct ib_srq *ibsrq, struct ib_srq_init_attr *init, if (err) return err; - rxe_get(pd); - srq->pd = pd; - err = rxe_srq_from_init(rxe, srq, init, udata, uresp); if (err) goto err_cleanup; + rxe_finalize(srq); return 0; err_cleanup: @@ -367,6 +361,7 @@ static int rxe_modify_srq(struct ib_srq *ibsrq, struct 
ib_srq_attr *attr, err = rxe_srq_from_attr(rxe, srq, attr, mask, &ucmd, udata); if (err) return err; + return 0; } @@ -380,6 +375,7 @@ static int rxe_query_srq(struct ib_srq *ibsrq, struct ib_srq_attr *attr) attr->max_wr = srq->rq.queue->buf->index_mask; attr->max_sge = srq->rq.max_sge; attr->srq_limit = srq->limit; + return 0; } @@ -546,7 +542,6 @@ static void init_send_wr(struct rxe_qp *qp, struct rxe_send_wr *wr, const struct ib_send_wr *ibwr) { wr->wr_id = ibwr->wr_id; - wr->num_sge = ibwr->num_sge; wr->opcode = ibwr->opcode; wr->send_flags = ibwr->send_flags; @@ -628,6 +623,8 @@ static void init_send_wqe(struct rxe_qp *qp, const struct ib_send_wr *ibwr, return; } + wqe->dma.num_sge = ibwr->num_sge; + if (unlikely(ibwr->send_flags & IB_SEND_INLINE)) copy_inline_data_to_wqe(wqe, ibwr); else
diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.h b/drivers/infiniband/sw/rxe/rxe_verbs.h index 6c4cfb802dd4..7dab7fa3ba6c 100644 --- a/drivers/infiniband/sw/rxe/rxe_verbs.h +++ b/drivers/infiniband/sw/rxe/rxe_verbs.h @@ -102,13 +102,19 @@ struct rxe_srq { struct ib_srq ibsrq; struct rxe_pool_elem elem; struct rxe_pd *pd; + struct rxe_xrcd *xrcd; /* xrc only */ + struct rxe_cq *cq; /* xrc only */ struct rxe_rq rq; - u32 srq_num; int limit; int error; }; +static inline u32 srq_num(struct rxe_srq *srq) +{ + return srq->ibsrq.ext.xrc.srq_num; +} + enum rxe_qp_state { QP_STATE_RESET, QP_STATE_INIT,
diff --git a/include/uapi/rdma/rdma_user_rxe.h b/include/uapi/rdma/rdma_user_rxe.h index f09c5c9e3dd5..514a1b6976fe 100644 --- a/include/uapi/rdma/rdma_user_rxe.h +++ b/include/uapi/rdma/rdma_user_rxe.h @@ -74,7 +74,7 @@ struct rxe_av { struct rxe_send_wr { __aligned_u64 wr_id; - __u32 num_sge; + __u32 srq_num; /* xrc only */ __u32 opcode; __u32 send_flags; union { @@ -191,8 +191,6 @@ struct rxe_create_qp_resp { struct rxe_create_srq_resp { struct mminfo mi; - __u32 srq_num; - __u32 reserved; }; struct rxe_modify_srq_cmd {
From patchwork Sat Sep 17 03:10:59 2022
From: Bob Pearson
To: jgg@nvidia.com, zyjzyj2000@gmail.com, lizhijian@fujitsu.com, linux-rdma@vger.kernel.org
Cc: Bob Pearson
Subject: [PATCH for-next 08/13] RDMA/rxe: Extend rxe_qp.c to support xrc qps
Date: Fri, 16 Sep 2022 22:10:59 -0500
Message-Id: <20220917031104.21222-9-rpearsonhpe@gmail.com>
In-Reply-To: <20220917031104.21222-1-rpearsonhpe@gmail.com>
References: <20220917031104.21222-1-rpearsonhpe@gmail.com>
Extend code in rxe_qp.c to support xrc qp types.
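As context for the checks in rxe_qp_chk_init() below, a rough libibverbs sketch (illustrative only, not part of this patch) of how the two xrc qp types are created: an initiator qp supplies a pd and a send cq but has no receive side, while a target qp supplies only the xrcd.

	#include <infiniband/verbs.h>

	/* Create an XRC initiator qp (send side only). */
	static struct ibv_qp *make_xrc_ini(struct ibv_context *ctx,
					   struct ibv_pd *pd,
					   struct ibv_cq *scq)
	{
		struct ibv_qp_init_attr_ex attr = {
			.qp_type = IBV_QPT_XRC_SEND,
			.send_cq = scq,
			.comp_mask = IBV_QP_INIT_ATTR_PD,
			.pd = pd,
			.cap = { .max_send_wr = 16, .max_send_sge = 1 },
		};

		return ibv_create_qp_ex(ctx, &attr);
	}

	/* Create an XRC target qp (receive side only, owned by the xrcd). */
	static struct ibv_qp *make_xrc_tgt(struct ibv_context *ctx,
					   struct ibv_xrcd *xrcd)
	{
		struct ibv_qp_init_attr_ex attr = {
			.qp_type = IBV_QPT_XRC_RECV,
			.comp_mask = IBV_QP_INIT_ATTR_XRCD,
			.xrcd = xrcd,
		};

		return ibv_create_qp_ex(ctx, &attr);
	}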
Signed-off-by: Bob Pearson --- drivers/infiniband/sw/rxe/rxe_av.c | 3 +- drivers/infiniband/sw/rxe/rxe_loc.h | 7 +- drivers/infiniband/sw/rxe/rxe_qp.c | 307 +++++++++++++++----------- drivers/infiniband/sw/rxe/rxe_verbs.c | 22 +- drivers/infiniband/sw/rxe/rxe_verbs.h | 1 + 5 files changed, 200 insertions(+), 140 deletions(-) diff --git a/drivers/infiniband/sw/rxe/rxe_av.c b/drivers/infiniband/sw/rxe/rxe_av.c index 3b05314ca739..c8f3ec53aa79 100644 --- a/drivers/infiniband/sw/rxe/rxe_av.c +++ b/drivers/infiniband/sw/rxe/rxe_av.c @@ -110,7 +110,8 @@ struct rxe_av *rxe_get_av(struct rxe_pkt_info *pkt, struct rxe_ah **ahp) if (!pkt || !pkt->qp) return NULL; - if (qp_type(pkt->qp) == IB_QPT_RC || qp_type(pkt->qp) == IB_QPT_UC) + if (qp_type(pkt->qp) == IB_QPT_RC || qp_type(pkt->qp) == IB_QPT_UC || + qp_type(pkt->qp) == IB_QPT_XRC_INI) return &pkt->qp->pri_av; if (!pkt->wqe) diff --git a/drivers/infiniband/sw/rxe/rxe_loc.h b/drivers/infiniband/sw/rxe/rxe_loc.h index 5526d83697c7..c6fb93a749ad 100644 --- a/drivers/infiniband/sw/rxe/rxe_loc.h +++ b/drivers/infiniband/sw/rxe/rxe_loc.h @@ -103,11 +103,12 @@ const char *rxe_parent_name(struct rxe_dev *rxe, unsigned int port_num); int next_opcode(struct rxe_qp *qp, struct rxe_send_wqe *wqe, u32 opcode); /* rxe_qp.c */ -int rxe_qp_chk_init(struct rxe_dev *rxe, struct ib_qp_init_attr *init); -int rxe_qp_from_init(struct rxe_dev *rxe, struct rxe_qp *qp, struct rxe_pd *pd, +int rxe_qp_chk_init(struct rxe_dev *rxe, struct ib_qp *ibqp, + struct ib_qp_init_attr *init); +int rxe_qp_from_init(struct rxe_dev *rxe, struct rxe_qp *qp, struct ib_qp_init_attr *init, struct rxe_create_qp_resp __user *uresp, - struct ib_pd *ibpd, struct ib_udata *udata); + struct ib_udata *udata); int rxe_qp_to_init(struct rxe_qp *qp, struct ib_qp_init_attr *init); int rxe_qp_chk_attr(struct rxe_dev *rxe, struct rxe_qp *qp, struct ib_qp_attr *attr, int mask); diff --git a/drivers/infiniband/sw/rxe/rxe_qp.c b/drivers/infiniband/sw/rxe/rxe_qp.c index 1dcbeacb3122..6cbc842b8cbb 100644 --- a/drivers/infiniband/sw/rxe/rxe_qp.c +++ b/drivers/infiniband/sw/rxe/rxe_qp.c @@ -56,30 +56,42 @@ static int rxe_qp_chk_cap(struct rxe_dev *rxe, struct ib_qp_cap *cap, return -EINVAL; } -int rxe_qp_chk_init(struct rxe_dev *rxe, struct ib_qp_init_attr *init) +int rxe_qp_chk_init(struct rxe_dev *rxe, struct ib_qp *ibqp, + struct ib_qp_init_attr *init) { + struct ib_pd *ibpd = ibqp->pd; struct ib_qp_cap *cap = &init->cap; struct rxe_port *port; int port_num = init->port_num; + if (init->create_flags) + return -EOPNOTSUPP; + switch (init->qp_type) { case IB_QPT_GSI: case IB_QPT_RC: case IB_QPT_UC: case IB_QPT_UD: + if (!ibpd || !init->recv_cq || !init->send_cq) + return -EINVAL; + break; + case IB_QPT_XRC_INI: + if (!init->send_cq) + return -EINVAL; + break; + case IB_QPT_XRC_TGT: + if (!init->xrcd) + return -EINVAL; break; default: return -EOPNOTSUPP; } - if (!init->recv_cq || !init->send_cq) { - pr_warn("missing cq\n"); - goto err1; + if (init->qp_type != IB_QPT_XRC_TGT) { + if (rxe_qp_chk_cap(rxe, cap, !!(init->srq || init->xrcd))) + goto err1; } - if (rxe_qp_chk_cap(rxe, cap, !!init->srq)) - goto err1; - if (init->qp_type == IB_QPT_GSI) { if (!rdma_is_port_valid(&rxe->ib_dev, port_num)) { pr_warn("invalid port = %d\n", port_num); @@ -148,49 +160,83 @@ static void cleanup_rd_atomic_resources(struct rxe_qp *qp) static void rxe_qp_init_misc(struct rxe_dev *rxe, struct rxe_qp *qp, struct ib_qp_init_attr *init) { - struct rxe_port *port; - u32 qpn; - + qp->ibqp.qp_type = init->qp_type; qp->sq_sig_type = 
init->sq_sig_type; qp->attr.path_mtu = 1; qp->mtu = ib_mtu_enum_to_int(qp->attr.path_mtu); - qpn = qp->elem.index; - port = &rxe->port; - switch (init->qp_type) { case IB_QPT_GSI: qp->ibqp.qp_num = 1; - port->qp_gsi_index = qpn; + rxe->port.qp_gsi_index = qp->elem.index; qp->attr.port_num = init->port_num; break; default: - qp->ibqp.qp_num = qpn; + qp->ibqp.qp_num = qp->elem.index; break; } spin_lock_init(&qp->state_lock); - spin_lock_init(&qp->req.task.state_lock); - spin_lock_init(&qp->resp.task.state_lock); - spin_lock_init(&qp->comp.task.state_lock); - - spin_lock_init(&qp->sq.sq_lock); - spin_lock_init(&qp->rq.producer_lock); - spin_lock_init(&qp->rq.consumer_lock); - atomic_set(&qp->ssn, 0); atomic_set(&qp->skb_out, 0); } +static int rxe_prepare_send_queue(struct rxe_dev *rxe, struct rxe_qp *qp, + struct ib_qp_init_attr *init, struct ib_udata *udata, + struct rxe_create_qp_resp __user *uresp) +{ + struct rxe_queue *q; + int wqe_size; + int err; + + qp->sq.max_wr = init->cap.max_send_wr; + + wqe_size = init->cap.max_send_sge*sizeof(struct ib_sge); + wqe_size = max_t(int, wqe_size, init->cap.max_inline_data); + + qp->sq.max_sge = wqe_size/sizeof(struct ib_sge); + qp->sq.max_inline = wqe_size; + wqe_size += sizeof(struct rxe_send_wqe); + + q = rxe_queue_init(rxe, &qp->sq.max_wr, wqe_size, + QUEUE_TYPE_FROM_CLIENT); + if (!q) + return -ENOMEM; + + err = do_mmap_info(rxe, uresp ? &uresp->sq_mi : NULL, udata, + q->buf, q->buf_size, &q->ip); + + if (err) { + vfree(q->buf); + kfree(q); + return err; + } + + init->cap.max_send_sge = qp->sq.max_sge; + init->cap.max_inline_data = qp->sq.max_inline; + + qp->sq.queue = q; + + return 0; +} + static int rxe_qp_init_req(struct rxe_dev *rxe, struct rxe_qp *qp, struct ib_qp_init_attr *init, struct ib_udata *udata, struct rxe_create_qp_resp __user *uresp) { int err; - int wqe_size; - enum queue_type type; + + err = rxe_prepare_send_queue(rxe, qp, init, udata, uresp); + if (err) + return err; + + spin_lock_init(&qp->sq.sq_lock); + spin_lock_init(&qp->req.task.state_lock); + spin_lock_init(&qp->comp.task.state_lock); + + skb_queue_head_init(&qp->resp_pkts); err = sock_create_kern(&init_net, AF_INET, SOCK_DGRAM, 0, &qp->sk); if (err < 0) @@ -205,32 +251,6 @@ static int rxe_qp_init_req(struct rxe_dev *rxe, struct rxe_qp *qp, * (0xc000 - 0xffff). */ qp->src_port = RXE_ROCE_V2_SPORT + (hash_32(qp_num(qp), 14) & 0x3fff); - qp->sq.max_wr = init->cap.max_send_wr; - - /* These caps are limited by rxe_qp_chk_cap() done by the caller */ - wqe_size = max_t(int, init->cap.max_send_sge * sizeof(struct ib_sge), - init->cap.max_inline_data); - qp->sq.max_sge = init->cap.max_send_sge = - wqe_size / sizeof(struct ib_sge); - qp->sq.max_inline = init->cap.max_inline_data = wqe_size; - wqe_size += sizeof(struct rxe_send_wqe); - - type = QUEUE_TYPE_FROM_CLIENT; - qp->sq.queue = rxe_queue_init(rxe, &qp->sq.max_wr, - wqe_size, type); - if (!qp->sq.queue) - return -ENOMEM; - - err = do_mmap_info(rxe, uresp ? 
&uresp->sq_mi : NULL, udata, - qp->sq.queue->buf, qp->sq.queue->buf_size, - &qp->sq.queue->ip); - - if (err) { - vfree(qp->sq.queue->buf); - kfree(qp->sq.queue); - qp->sq.queue = NULL; - return err; - } qp->req.wqe_index = queue_get_producer(qp->sq.queue, QUEUE_TYPE_FROM_CLIENT); @@ -240,57 +260,71 @@ static int rxe_qp_init_req(struct rxe_dev *rxe, struct rxe_qp *qp, qp->req.opcode = -1; qp->comp.opcode = -1; - skb_queue_head_init(&qp->req_pkts); - rxe_init_task(&qp->req.task, qp, rxe_requester, "req"); rxe_init_task(&qp->comp.task, qp, rxe_completer, "comp"); qp->qp_timeout_jiffies = 0; /* Can't be set for UD/UC in modify_qp */ - if (init->qp_type == IB_QPT_RC) { + if (init->qp_type == IB_QPT_RC || init->qp_type == IB_QPT_XRC_INI) { timer_setup(&qp->rnr_nak_timer, rnr_nak_timer, 0); timer_setup(&qp->retrans_timer, retransmit_timer, 0); } return 0; } +static int rxe_prepare_recv_queue(struct rxe_dev *rxe, struct rxe_qp *qp, + struct ib_qp_init_attr *init, struct ib_udata *udata, + struct rxe_create_qp_resp __user *uresp) +{ + struct rxe_queue *q; + int wqe_size; + int err; + + qp->rq.max_wr = init->cap.max_recv_wr; + qp->rq.max_sge = init->cap.max_recv_sge; + + wqe_size = sizeof(struct rxe_recv_wqe) + + qp->rq.max_sge*sizeof(struct ib_sge); + + q = rxe_queue_init(rxe, &qp->rq.max_wr, wqe_size, + QUEUE_TYPE_FROM_CLIENT); + if (!q) + return -ENOMEM; + + err = do_mmap_info(rxe, uresp ? &uresp->rq_mi : NULL, udata, + q->buf, q->buf_size, &q->ip); + + if (err) { + vfree(q->buf); + kfree(q); + return err; + } + + qp->rq.queue = q; + + return 0; +} + static int rxe_qp_init_resp(struct rxe_dev *rxe, struct rxe_qp *qp, struct ib_qp_init_attr *init, struct ib_udata *udata, struct rxe_create_qp_resp __user *uresp) { int err; - int wqe_size; - enum queue_type type; - if (!qp->srq) { - qp->rq.max_wr = init->cap.max_recv_wr; - qp->rq.max_sge = init->cap.max_recv_sge; - - wqe_size = rcv_wqe_size(qp->rq.max_sge); - - pr_debug("qp#%d max_wr = %d, max_sge = %d, wqe_size = %d\n", - qp_num(qp), qp->rq.max_wr, qp->rq.max_sge, wqe_size); - - type = QUEUE_TYPE_FROM_CLIENT; - qp->rq.queue = rxe_queue_init(rxe, &qp->rq.max_wr, - wqe_size, type); - if (!qp->rq.queue) - return -ENOMEM; - - err = do_mmap_info(rxe, uresp ? &uresp->rq_mi : NULL, udata, - qp->rq.queue->buf, qp->rq.queue->buf_size, - &qp->rq.queue->ip); - if (err) { - vfree(qp->rq.queue->buf); - kfree(qp->rq.queue); - qp->rq.queue = NULL; + if (!qp->srq && qp_type(qp) != IB_QPT_XRC_TGT) { + err = rxe_prepare_recv_queue(rxe, qp, init, udata, uresp); + if (err) return err; - } + + spin_lock_init(&qp->rq.producer_lock); + spin_lock_init(&qp->rq.consumer_lock); } - skb_queue_head_init(&qp->resp_pkts); + spin_lock_init(&qp->resp.task.state_lock); + + skb_queue_head_init(&qp->req_pkts); rxe_init_task(&qp->resp.task, qp, rxe_responder, "resp"); @@ -303,64 +337,82 @@ static int rxe_qp_init_resp(struct rxe_dev *rxe, struct rxe_qp *qp, } /* called by the create qp verb */ -int rxe_qp_from_init(struct rxe_dev *rxe, struct rxe_qp *qp, struct rxe_pd *pd, +int rxe_qp_from_init(struct rxe_dev *rxe, struct rxe_qp *qp, struct ib_qp_init_attr *init, struct rxe_create_qp_resp __user *uresp, - struct ib_pd *ibpd, struct ib_udata *udata) { int err; + struct rxe_pd *pd = to_rpd(qp->ibqp.pd); struct rxe_cq *rcq = to_rcq(init->recv_cq); struct rxe_cq *scq = to_rcq(init->send_cq); - struct rxe_srq *srq = init->srq ? 
to_rsrq(init->srq) : NULL; + struct rxe_srq *srq = to_rsrq(init->srq); + struct rxe_xrcd *xrcd = to_rxrcd(init->xrcd); - rxe_get(pd); - rxe_get(rcq); - rxe_get(scq); - if (srq) + if (pd) { + rxe_get(pd); + qp->pd = pd; + } + if (rcq) { + rxe_get(rcq); + qp->rcq = rcq; + atomic_inc(&rcq->num_wq); + } + if (scq) { + rxe_get(scq); + qp->scq = scq; + atomic_inc(&scq->num_wq); + } + if (srq) { rxe_get(srq); - - qp->pd = pd; - qp->rcq = rcq; - qp->scq = scq; - qp->srq = srq; - - atomic_inc(&rcq->num_wq); - atomic_inc(&scq->num_wq); + qp->srq = srq; + } + if (xrcd) { + rxe_get(xrcd); + qp->xrcd = xrcd; + } rxe_qp_init_misc(rxe, qp, init); - err = rxe_qp_init_req(rxe, qp, init, udata, uresp); - if (err) - goto err1; + switch (init->qp_type) { + case IB_QPT_RC: + case IB_QPT_UC: + case IB_QPT_GSI: + case IB_QPT_UD: + err = rxe_qp_init_req(rxe, qp, init, udata, uresp); + if (err) + goto err_out; - err = rxe_qp_init_resp(rxe, qp, init, udata, uresp); - if (err) - goto err2; + err = rxe_qp_init_resp(rxe, qp, init, udata, uresp); + if (err) + goto err_unwind; + break; + case IB_QPT_XRC_INI: + err = rxe_qp_init_req(rxe, qp, init, udata, uresp); + if (err) + goto err_out; + break; + case IB_QPT_XRC_TGT: + err = rxe_qp_init_resp(rxe, qp, init, udata, uresp); + if (err) + goto err_out; + break; + default: + /* not reached */ + err = -EOPNOTSUPP; + goto err_out; + }; qp->attr.qp_state = IB_QPS_RESET; qp->valid = 1; return 0; -err2: +err_unwind: rxe_queue_cleanup(qp->sq.queue); qp->sq.queue = NULL; -err1: - atomic_dec(&rcq->num_wq); - atomic_dec(&scq->num_wq); - - qp->pd = NULL; - qp->rcq = NULL; - qp->scq = NULL; - qp->srq = NULL; - - if (srq) - rxe_put(srq); - rxe_put(scq); - rxe_put(rcq); - rxe_put(pd); - +err_out: + /* rxe_qp_cleanup handles the rest */ return err; } @@ -486,7 +538,8 @@ static void rxe_qp_reset(struct rxe_qp *qp) /* stop request/comp */ if (qp->sq.queue) { - if (qp_type(qp) == IB_QPT_RC) + if (qp_type(qp) == IB_QPT_RC || + qp_type(qp) == IB_QPT_XRC_INI) rxe_disable_task(&qp->comp.task); rxe_disable_task(&qp->req.task); } @@ -530,7 +583,8 @@ static void rxe_qp_reset(struct rxe_qp *qp) rxe_enable_task(&qp->resp.task); if (qp->sq.queue) { - if (qp_type(qp) == IB_QPT_RC) + if (qp_type(qp) == IB_QPT_RC || + qp_type(qp) == IB_QPT_XRC_INI) rxe_enable_task(&qp->comp.task); rxe_enable_task(&qp->req.task); @@ -543,7 +597,8 @@ static void rxe_qp_drain(struct rxe_qp *qp) if (qp->sq.queue) { if (qp->req.state != QP_STATE_DRAINED) { qp->req.state = QP_STATE_DRAIN; - if (qp_type(qp) == IB_QPT_RC) + if (qp_type(qp) == IB_QPT_RC || + qp_type(qp) == IB_QPT_XRC_INI) rxe_run_task(&qp->comp.task, 1); else __rxe_do_task(&qp->comp.task); @@ -563,7 +618,7 @@ void rxe_qp_error(struct rxe_qp *qp) /* drain work and packet queues */ rxe_run_task(&qp->resp.task, 1); - if (qp_type(qp) == IB_QPT_RC) + if (qp_type(qp) == IB_QPT_RC || qp_type(qp) == IB_QPT_XRC_INI) rxe_run_task(&qp->comp.task, 1); else __rxe_do_task(&qp->comp.task); @@ -673,7 +728,8 @@ int rxe_qp_from_attr(struct rxe_qp *qp, struct ib_qp_attr *attr, int mask, qp->attr.sq_psn = (attr->sq_psn & BTH_PSN_MASK); qp->req.psn = qp->attr.sq_psn; qp->comp.psn = qp->attr.sq_psn; - pr_debug("qp#%d set req psn = 0x%x\n", qp_num(qp), qp->req.psn); + pr_debug("qp#%d set req psn = %d comp psn = %d\n", qp_num(qp), + qp->req.psn, qp->comp.psn); } if (mask & IB_QP_PATH_MIG_STATE) @@ -788,7 +844,7 @@ static void rxe_qp_do_cleanup(struct work_struct *work) qp->qp_timeout_jiffies = 0; rxe_cleanup_task(&qp->resp.task); - if (qp_type(qp) == IB_QPT_RC) { + if (qp_type(qp) == 
IB_QPT_RC || qp_type(qp) == IB_QPT_XRC_INI) { del_timer_sync(&qp->retrans_timer); del_timer_sync(&qp->rnr_nak_timer); } @@ -808,6 +864,9 @@ static void rxe_qp_do_cleanup(struct work_struct *work) if (qp->sq.queue) rxe_queue_cleanup(qp->sq.queue); + if (qp->xrcd) + rxe_put(qp->xrcd); + if (qp->srq) rxe_put(qp->srq); @@ -830,7 +889,7 @@ static void rxe_qp_do_cleanup(struct work_struct *work) if (qp->resp.mr) rxe_put(qp->resp.mr); - if (qp_type(qp) == IB_QPT_RC) + if (qp_type(qp) == IB_QPT_RC || qp_type(qp) == IB_QPT_XRC_INI) sk_dst_reset(qp->sk->sk); free_rd_atomic_resources(qp);
diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.c b/drivers/infiniband/sw/rxe/rxe_verbs.c index ef86f0c5890e..59ba11e52bac 100644 --- a/drivers/infiniband/sw/rxe/rxe_verbs.c +++ b/drivers/infiniband/sw/rxe/rxe_verbs.c @@ -416,7 +416,6 @@ static int rxe_create_qp(struct ib_qp *ibqp, struct ib_qp_init_attr *init, { int err; struct rxe_dev *rxe = to_rdev(ibqp->device); - struct rxe_pd *pd = to_rpd(ibqp->pd); struct rxe_qp *qp = to_rqp(ibqp); struct rxe_create_qp_resp __user *uresp = NULL; @@ -424,16 +423,7 @@ static int rxe_create_qp(struct ib_qp *ibqp, struct ib_qp_init_attr *init, if (udata->outlen < sizeof(*uresp)) return -EINVAL; uresp = udata->outbuf; - } - - if (init->create_flags) - return -EOPNOTSUPP; - err = rxe_qp_chk_init(rxe, init); - if (err) - return err; - - if (udata) { if (udata->inlen) return -EINVAL; @@ -442,11 +432,15 @@ static int rxe_create_qp(struct ib_qp *ibqp, struct ib_qp_init_attr *init, qp->is_user = false; } + err = rxe_qp_chk_init(rxe, ibqp, init); + if (err) + return err; + err = rxe_add_to_pool(&rxe->qp_pool, qp); if (err) return err; - err = rxe_qp_from_init(rxe, qp, pd, init, uresp, ibqp->pd, udata); + err = rxe_qp_from_init(rxe, qp, init, uresp, udata); if (err) goto qp_init; @@ -517,6 +511,9 @@ static int validate_send_wr(struct rxe_qp *qp, const struct ib_send_wr *ibwr, int num_sge = ibwr->num_sge; struct rxe_sq *sq = &qp->sq; + if (unlikely(qp_type(qp) == IB_QPT_XRC_TGT)) + return -EOPNOTSUPP; + if (unlikely(num_sge > sq->max_sge)) goto err1; @@ -740,8 +737,9 @@ static int rxe_post_send(struct ib_qp *ibqp, const struct ib_send_wr *wr, /* Utilize process context to do protocol processing */ rxe_run_task(&qp->req.task, 0); return 0; - } else + } else { return rxe_post_send_kernel(qp, wr, bad_wr); + } } static int rxe_post_recv(struct ib_qp *ibqp, const struct ib_recv_wr *wr,
diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.h b/drivers/infiniband/sw/rxe/rxe_verbs.h index 7dab7fa3ba6c..ee482a0569b8 100644 --- a/drivers/infiniband/sw/rxe/rxe_verbs.h +++ b/drivers/infiniband/sw/rxe/rxe_verbs.h @@ -230,6 +230,7 @@ struct rxe_qp { struct rxe_srq *srq; struct rxe_cq *scq; struct rxe_cq *rcq; + struct rxe_xrcd *xrcd; enum ib_sig_type sq_sig_type;
From patchwork Sat Sep 17 03:11:00 2022
From patchwork Sat Sep 17 03:11:00 2022
X-Patchwork-Submitter: Bob Pearson
X-Patchwork-Id: 12979001
From: Bob Pearson
To: jgg@nvidia.com, zyjzyj2000@gmail.com, lizhijian@fujitsu.com,
	linux-rdma@vger.kernel.org
Cc: Bob Pearson
Subject: [PATCH for-next 09/13] RDMA/rxe: Extend rxe_recv.c to support xrc
Date: Fri, 16 Sep 2022 22:11:00 -0500
Message-Id: <20220917031104.21222-10-rpearsonhpe@gmail.com>
In-Reply-To: <20220917031104.21222-1-rpearsonhpe@gmail.com>
References: <20220917031104.21222-1-rpearsonhpe@gmail.com>
X-Mailing-List: linux-rdma@vger.kernel.org

Extend rxe_recv.c to support xrc packets. Add checks for qp type and
check qp->xrcd matches srq->xrcd.
Signed-off-by: Bob Pearson
---
 drivers/infiniband/sw/rxe/rxe_hdr.h  |  5 +-
 drivers/infiniband/sw/rxe/rxe_recv.c | 79 +++++++++++++++++++++-------
 2 files changed, 63 insertions(+), 21 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe_hdr.h b/drivers/infiniband/sw/rxe/rxe_hdr.h
index e947bcf75209..fb9959d91b8d 100644
--- a/drivers/infiniband/sw/rxe/rxe_hdr.h
+++ b/drivers/infiniband/sw/rxe/rxe_hdr.h
@@ -14,7 +14,10 @@ struct rxe_pkt_info {
 	struct rxe_dev		*rxe;		/* device that owns packet */
 	struct rxe_qp		*qp;		/* qp that owns packet */
-	struct rxe_send_wqe	*wqe;		/* send wqe */
+	union {
+		struct rxe_send_wqe *wqe;	/* send wqe */
+		struct rxe_srq	*srq;		/* srq for recvd xrc packets */
+	};
 	u8			*hdr;		/* points to bth */
 	u32			mask;		/* useful info about pkt */
 	u32			psn;		/* bth psn of packet */
diff --git a/drivers/infiniband/sw/rxe/rxe_recv.c b/drivers/infiniband/sw/rxe/rxe_recv.c
index f3ad7b6dbd97..4f35757d3c52 100644
--- a/drivers/infiniband/sw/rxe/rxe_recv.c
+++ b/drivers/infiniband/sw/rxe/rxe_recv.c
@@ -13,49 +13,51 @@ static int check_type_state(struct rxe_dev *rxe, struct rxe_pkt_info *pkt,
 			    struct rxe_qp *qp)
 {
-	unsigned int pkt_type;
+	unsigned int pkt_type = pkt->opcode & IB_OPCODE_TYPE;
 
 	if (unlikely(!qp->valid))
-		goto err1;
+		goto err_out;
 
-	pkt_type = pkt->opcode & 0xe0;
 	switch (qp_type(qp)) {
 	case IB_QPT_RC:
-		if (unlikely(pkt_type != IB_OPCODE_RC)) {
-			pr_warn_ratelimited("bad qp type\n");
-			goto err1;
-		}
+		if (unlikely(pkt_type != IB_OPCODE_RC))
+			goto err_out;
 		break;
 	case IB_QPT_UC:
-		if (unlikely(pkt_type != IB_OPCODE_UC)) {
-			pr_warn_ratelimited("bad qp type\n");
-			goto err1;
-		}
+		if (unlikely(pkt_type != IB_OPCODE_UC))
+			goto err_out;
 		break;
 	case IB_QPT_UD:
 	case IB_QPT_GSI:
-		if (unlikely(pkt_type != IB_OPCODE_UD)) {
-			pr_warn_ratelimited("bad qp type\n");
-			goto err1;
-		}
+		if (unlikely(pkt_type != IB_OPCODE_UD))
+			goto err_out;
+		break;
+	case IB_QPT_XRC_INI:
+		if (unlikely(pkt_type != IB_OPCODE_XRC))
+			goto err_out;
+		break;
+	case IB_QPT_XRC_TGT:
+		if (unlikely(pkt_type != IB_OPCODE_XRC))
+			goto err_out;
 		break;
 	default:
-		pr_warn_ratelimited("unsupported qp type\n");
-		goto err1;
+		goto err_out;
 	}
 
 	if (pkt->mask & RXE_REQ_MASK) {
 		if (unlikely(qp->resp.state != QP_STATE_READY))
-			goto err1;
+			goto err_out;
 	} else if (unlikely(qp->req.state < QP_STATE_READY ||
 			    qp->req.state > QP_STATE_DRAINED)) {
-		goto err1;
+		goto err_out;
 	}
 
 	return 0;
 
-err1:
+err_out:
+	pr_debug("%s: failed qp#%d: opcode = 0x%02x\n", __func__,
+		 qp->elem.index, pkt->opcode);
 	return -EINVAL;
 }
 
@@ -166,6 +168,37 @@ static int check_addr(struct rxe_dev *rxe, struct rxe_pkt_info *pkt,
 	return -EINVAL;
 }
 
+static int check_xrcd(struct rxe_dev *rxe, struct rxe_pkt_info *pkt,
+		      struct rxe_qp *qp)
+{
+	int err;
+
+	struct rxe_xrcd *xrcd = qp->xrcd;
+	u32 srqn = xrceth_srqn(pkt);
+	struct rxe_srq *srq;
+
+	srq = rxe_pool_get_index(&rxe->srq_pool, srqn);
+	if (unlikely(!srq)) {
+		err = -EINVAL;
+		goto err_out;
+	}
+
+	if (unlikely(srq->xrcd != xrcd)) {
+		rxe_put(srq);
+		err = -EINVAL;
+		goto err_out;
+	}
+
+	pkt->srq = srq;
+
+	return 0;
+
+err_out:
+	pr_debug("%s: qp#%d: failed err = %d\n", __func__,
+		 qp->elem.index, err);
+	return err;
+}
+
 static int hdr_check(struct rxe_pkt_info *pkt)
 {
 	struct rxe_dev *rxe = pkt->rxe;
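check_xrcd() keys off the SRQ number delivered in the XRC extended
transport header. For readers without the IBA spec at hand: the XRCETH is
a single 32-bit word following the BTH whose low 24 bits name the
destination SRQ, with the top 8 bits reserved. A sketch of the extraction
(field layout assumed from the spec; this is not a claim about rxe's
xrceth_srqn() helper):

#include <stdint.h>
#include <stdio.h>
#include <arpa/inet.h>	/* ntohl/htonl */

struct xrceth {
	uint32_t srqn;		/* big-endian on the wire */
};

static uint32_t xrceth_srq_number(const struct xrceth *h)
{
	return ntohl(h->srqn) & 0x00ffffff;	/* strip reserved bits */
}

int main(void)
{
	struct xrceth h = { .srqn = htonl(0xab000123) };  /* 0xab reserved */

	printf("srqn = 0x%x\n", xrceth_srq_number(&h));   /* prints 0x123 */
	return 0;
}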
@@ -205,6 +238,12 @@ static int hdr_check(struct rxe_pkt_info *pkt)
 		err = check_keys(rxe, pkt, qpn, qp);
 		if (unlikely(err))
 			goto err2;
+
+		if (qp_type(qp) == IB_QPT_XRC_TGT) {
+			err = check_xrcd(rxe, pkt, qp);
+			if (unlikely(err))
+				goto err2;
+		}
 	} else {
 		if (unlikely((pkt->mask & RXE_GRH_MASK) == 0)) {
 			pr_warn_ratelimited("no grh for mcast qpn\n");

From patchwork Sat Sep 17 03:11:01 2022
X-Patchwork-Submitter: Bob Pearson
X-Patchwork-Id: 12979006
From: Bob Pearson
To: jgg@nvidia.com, zyjzyj2000@gmail.com, lizhijian@fujitsu.com,
	linux-rdma@vger.kernel.org
Cc: Bob Pearson
Subject: [PATCH for-next 10/13] RDMA/rxe: Extend rxe_comp.c to support xrc qps
Date: Fri, 16 Sep 2022 22:11:01 -0500
Message-Id: <20220917031104.21222-11-rpearsonhpe@gmail.com>
In-Reply-To: <20220917031104.21222-1-rpearsonhpe@gmail.com>
References: <20220917031104.21222-1-rpearsonhpe@gmail.com>
X-Mailing-List: linux-rdma@vger.kernel.org

Extend code in rxe_comp.c to support xrc qp types.

Signed-off-by: Bob Pearson
---
 drivers/infiniband/sw/rxe/rxe_comp.c | 45 ++++++++++++++--------------
 1 file changed, 22 insertions(+), 23 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe_comp.c b/drivers/infiniband/sw/rxe/rxe_comp.c
index 1f10ae4a35d5..cb6621b4055d 100644
--- a/drivers/infiniband/sw/rxe/rxe_comp.c
+++ b/drivers/infiniband/sw/rxe/rxe_comp.c
@@ -213,12 +213,13 @@ static inline enum comp_state check_ack(struct rxe_qp *qp,
 					struct rxe_pkt_info *pkt,
 					struct rxe_send_wqe *wqe)
 {
+	struct rxe_dev *rxe = to_rdev(qp->ibqp.device);
 	unsigned int mask = pkt->mask;
+	int opcode;
 	u8 syn;
-	struct rxe_dev *rxe = to_rdev(qp->ibqp.device);
 
-	/* Check the sequence only */
-	switch (qp->comp.opcode) {
+	/* Mask off type bits and check the sequence only */
+	switch (qp->comp.opcode & IB_OPCODE_CMD) {
 	case -1:
 		/* Will catch all *_ONLY cases. */
 		if (!(mask & RXE_FIRST_MASK))
			return COMPST_ERROR;
@@ -226,42 +227,39 @@ static inline enum comp_state check_ack(struct rxe_qp *qp,
 
 		break;
 
-	case IB_OPCODE_RC_RDMA_READ_RESPONSE_FIRST:
-	case IB_OPCODE_RC_RDMA_READ_RESPONSE_MIDDLE:
-		if (pkt->opcode != IB_OPCODE_RC_RDMA_READ_RESPONSE_MIDDLE &&
-		    pkt->opcode != IB_OPCODE_RC_RDMA_READ_RESPONSE_LAST) {
+	case IB_OPCODE_RDMA_READ_RESPONSE_FIRST:
+	case IB_OPCODE_RDMA_READ_RESPONSE_MIDDLE:
+		opcode = pkt->opcode & IB_OPCODE_CMD;
+		if (opcode != IB_OPCODE_RDMA_READ_RESPONSE_MIDDLE &&
+		    opcode != IB_OPCODE_RDMA_READ_RESPONSE_LAST) {
 			/* read retries of partial data may restart from
 			 * read response first or response only.
 			 */
 			if ((pkt->psn == wqe->first_psn &&
-			     pkt->opcode ==
-			     IB_OPCODE_RC_RDMA_READ_RESPONSE_FIRST) ||
+			     opcode == IB_OPCODE_RDMA_READ_RESPONSE_FIRST) ||
 			    (wqe->first_psn == wqe->last_psn &&
-			     pkt->opcode ==
-			     IB_OPCODE_RC_RDMA_READ_RESPONSE_ONLY))
+			     opcode == IB_OPCODE_RDMA_READ_RESPONSE_ONLY))
 				break;
 
 			return COMPST_ERROR;
 		}
 		break;
 	default:
-		WARN_ON_ONCE(1);
+		/* other masked opcodes are not an error here */
+		break;
 	}
 
-	/* Check operation validity. */
+	/* Mask off the type bits and check operation validity. */
-	switch (pkt->opcode) {
-	case IB_OPCODE_RC_RDMA_READ_RESPONSE_FIRST:
-	case IB_OPCODE_RC_RDMA_READ_RESPONSE_LAST:
-	case IB_OPCODE_RC_RDMA_READ_RESPONSE_ONLY:
+	switch (pkt->opcode & IB_OPCODE_CMD) {
+	case IB_OPCODE_RDMA_READ_RESPONSE_FIRST:
+	case IB_OPCODE_RDMA_READ_RESPONSE_LAST:
+	case IB_OPCODE_RDMA_READ_RESPONSE_ONLY:
 		syn = aeth_syn(pkt);
 
 		if ((syn & AETH_TYPE_MASK) != AETH_ACK)
 			return COMPST_ERROR;
 
 		fallthrough;
-		/* (IB_OPCODE_RC_RDMA_READ_RESPONSE_MIDDLE doesn't have an AETH)
-		 */
-	case IB_OPCODE_RC_RDMA_READ_RESPONSE_MIDDLE:
+	case IB_OPCODE_RDMA_READ_RESPONSE_MIDDLE:
 		if (wqe->wr.opcode != IB_WR_RDMA_READ &&
 		    wqe->wr.opcode != IB_WR_RDMA_READ_WITH_INV) {
 			wqe->status = IB_WC_FATAL_ERR;
@@ -270,7 +268,7 @@ static inline enum comp_state check_ack(struct rxe_qp *qp,
 		reset_retry_counters(qp);
 		return COMPST_READ;
 
-	case IB_OPCODE_RC_ATOMIC_ACKNOWLEDGE:
+	case IB_OPCODE_ATOMIC_ACKNOWLEDGE:
 		syn = aeth_syn(pkt);
 
 		if ((syn & AETH_TYPE_MASK) != AETH_ACK)
@@ -282,7 +280,7 @@ static inline enum comp_state check_ack(struct rxe_qp *qp,
 		reset_retry_counters(qp);
 		return COMPST_ATOMIC;
 
-	case IB_OPCODE_RC_ACKNOWLEDGE:
+	case IB_OPCODE_ACKNOWLEDGE:
 		syn = aeth_syn(pkt);
 		switch (syn & AETH_TYPE_MASK) {
 		case AETH_ACK:
@@ -669,7 +667,8 @@ int rxe_completer(void *arg)
 	 *	     timeouts but try to keep them as few as possible)
 	 *	 (4) the timeout parameter is set
 	 */
-	if ((qp_type(qp) == IB_QPT_RC) &&
+	if ((qp_type(qp) == IB_QPT_RC ||
+	     qp_type(qp) == IB_QPT_XRC_INI) &&
 	    (qp->req.state == QP_STATE_READY) &&
 	    (psn_compare(qp->req.psn, qp->comp.psn) > 0) &&
 	    qp->qp_timeout_jiffies)
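The masking trick above works because IBA opcodes keep the transport type
in the top three bits and the operation in the low five, so the RC and XRC
spellings of one operation agree once the type bits are masked off. A
compile-and-run sketch of the arithmetic (constants restated locally;
IB_OPCODE_TYPE/IB_OPCODE_CMD are the masks this series leans on, and 0xa0
is the XRC type prefix per the IBA opcode table):

#include <stdint.h>
#include <stdio.h>

enum {
	OPCODE_TYPE = 0xe0,	/* top 3 bits: transport type */
	OPCODE_CMD  = 0x1f,	/* low 5 bits: operation */
	TYPE_RC     = 0x00,
	TYPE_XRC    = 0xa0,
	CMD_RDMA_READ_RESPONSE_MIDDLE = 0x0e,
};

int main(void)
{
	uint8_t rc  = TYPE_RC  | CMD_RDMA_READ_RESPONSE_MIDDLE;  /* 0x0e */
	uint8_t xrc = TYPE_XRC | CMD_RDMA_READ_RESPONSE_MIDDLE;  /* 0xae */

	/* same operation, different transport */
	printf("same cmd:  %d\n", (rc & OPCODE_CMD) == (xrc & OPCODE_CMD));
	printf("same type: %d\n", (rc & OPCODE_TYPE) == (xrc & OPCODE_TYPE));
	return 0;
}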
From patchwork Sat Sep 17 03:11:02 2022
X-Patchwork-Submitter: Bob Pearson
X-Patchwork-Id: 12979002
From: Bob Pearson
To: jgg@nvidia.com, zyjzyj2000@gmail.com, lizhijian@fujitsu.com,
	linux-rdma@vger.kernel.org
Cc: Bob Pearson
Subject: [PATCH for-next 11/13] RDMA/rxe: Extend rxe_req.c to support xrc qps
Date: Fri, 16 Sep 2022 22:11:02 -0500
Message-Id: <20220917031104.21222-12-rpearsonhpe@gmail.com>
In-Reply-To: <20220917031104.21222-1-rpearsonhpe@gmail.com>
References: <20220917031104.21222-1-rpearsonhpe@gmail.com>
X-Mailing-List: linux-rdma@vger.kernel.org

Extend code in rxe_req.c to support xrc qp types.

Signed-off-by: Bob Pearson
---
 drivers/infiniband/sw/rxe/rxe_req.c | 38 +++++++++++++++++------------
 1 file changed, 22 insertions(+), 16 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe_req.c b/drivers/infiniband/sw/rxe/rxe_req.c
index d2a9abfed596..e7bb969f97f3 100644
--- a/drivers/infiniband/sw/rxe/rxe_req.c
+++ b/drivers/infiniband/sw/rxe/rxe_req.c
@@ -229,7 +229,7 @@ static struct sk_buff *init_req_packet(struct rxe_qp *qp,
 {
 	struct rxe_dev *rxe = to_rdev(qp->ibqp.device);
 	struct sk_buff *skb;
-	struct rxe_send_wr *ibwr = &wqe->wr;
+	struct rxe_send_wr *wr = &wqe->wr;
 	int pad = (-payload) & 0x3;
 	int paylen;
 	int solicited;
@@ -246,13 +246,13 @@ static struct sk_buff *init_req_packet(struct rxe_qp *qp,
 		return NULL;
 
 	/* init bth */
-	solicited = (ibwr->send_flags & IB_SEND_SOLICITED) &&
+	solicited = (wr->send_flags & IB_SEND_SOLICITED) &&
 			(pkt->mask & RXE_LAST_MASK) &&
 			((pkt->mask & (RXE_SEND_MASK)) ||
 			(pkt->mask & (RXE_WRITE_MASK | RXE_IMMDT_MASK)) ==
 			(RXE_WRITE_MASK | RXE_IMMDT_MASK));
 
-	qp_num = (pkt->mask & RXE_DETH_MASK) ? ibwr->wr.ud.remote_qpn :
+	qp_num = (pkt->mask & RXE_DETH_MASK) ? wr->wr.ud.remote_qpn :
 					 qp->attr.dest_qp_num;
 
 	ack_req = ((pkt->mask & RXE_LAST_MASK) ||
@@ -264,34 +264,37 @@ static struct sk_buff *init_req_packet(struct rxe_qp *qp,
 			 ack_req, pkt->psn);
 
 	/* init optional headers */
+	if (pkt->mask & RXE_XRCETH_MASK)
+		xrceth_set_srqn(pkt, wr->srq_num);
+
 	if (pkt->mask & RXE_RETH_MASK) {
-		reth_set_rkey(pkt, ibwr->wr.rdma.rkey);
+		reth_set_rkey(pkt, wr->wr.rdma.rkey);
 		reth_set_va(pkt, wqe->iova);
 		reth_set_len(pkt, wqe->dma.resid);
 	}
 
 	if (pkt->mask & RXE_IMMDT_MASK)
-		immdt_set_imm(pkt, ibwr->ex.imm_data);
+		immdt_set_imm(pkt, wr->ex.imm_data);
 
 	if (pkt->mask & RXE_IETH_MASK)
-		ieth_set_rkey(pkt, ibwr->ex.invalidate_rkey);
+		ieth_set_rkey(pkt, wr->ex.invalidate_rkey);
 
 	if (pkt->mask & RXE_ATMETH_MASK) {
 		atmeth_set_va(pkt, wqe->iova);
-		if (opcode == IB_OPCODE_RC_COMPARE_SWAP) {
-			atmeth_set_swap_add(pkt, ibwr->wr.atomic.swap);
-			atmeth_set_comp(pkt, ibwr->wr.atomic.compare_add);
+		if ((opcode & IB_OPCODE_CMD) == IB_OPCODE_COMPARE_SWAP) {
+			atmeth_set_swap_add(pkt, wr->wr.atomic.swap);
+			atmeth_set_comp(pkt, wr->wr.atomic.compare_add);
 		} else {
-			atmeth_set_swap_add(pkt, ibwr->wr.atomic.compare_add);
+			atmeth_set_swap_add(pkt, wr->wr.atomic.compare_add);
 		}
-		atmeth_set_rkey(pkt, ibwr->wr.atomic.rkey);
+		atmeth_set_rkey(pkt, wr->wr.atomic.rkey);
 	}
 
 	if (pkt->mask & RXE_DETH_MASK) {
 		if (qp->ibqp.qp_num == 1)
 			deth_set_qkey(pkt, GSI_QKEY);
 		else
-			deth_set_qkey(pkt, ibwr->wr.ud.remote_qkey);
+			deth_set_qkey(pkt, wr->wr.ud.remote_qkey);
 		deth_set_sqp(pkt, qp->ibqp.qp_num);
 	}
 
@@ -338,8 +341,10 @@ static void update_wqe_state(struct rxe_qp *qp,
 		struct rxe_pkt_info *pkt)
 {
 	if (pkt->mask & RXE_LAST_MASK) {
-		if (qp_type(qp) == IB_QPT_RC)
+		if (qp_type(qp) == IB_QPT_RC ||
+		    qp_type(qp) == IB_QPT_XRC_INI)
 			wqe->state = wqe_state_pending;
+		/* other qp types handled in rxe_xmit_packet() */
 	} else {
 		wqe->state = wqe_state_processing;
 	}
@@ -532,9 +537,10 @@ int rxe_requester(void *arg)
 			goto done;
 	}
 
-	if (unlikely(qp_type(qp) == IB_QPT_RC &&
-		psn_compare(qp->req.psn, (qp->comp.psn +
-				RXE_MAX_UNACKED_PSNS)) > 0)) {
+	if (unlikely((qp_type(qp) == IB_QPT_RC ||
+		      qp_type(qp) == IB_QPT_XRC_INI) &&
+		     psn_compare(qp->req.psn, (qp->comp.psn +
+				RXE_MAX_UNACKED_PSNS)) > 0)) {
 		qp->req.wait_psn = 1;
 		goto exit;
 	}
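rxe_requester()'s back-pressure test now also applies to XRC initiators:
the send side stalls once req.psn runs more than RXE_MAX_UNACKED_PSNS
ahead of comp.psn. PSNs are 24-bit sequence numbers that wrap, which is
why the test goes through psn_compare() rather than a plain '>'. A sketch
of the arithmetic -- this mirrors psn_compare() as I read the driver, and
the constant is a stand-in:

#include <stdint.h>
#include <stdio.h>

#define MAX_UNACKED_PSNS 128	/* stand-in for RXE_MAX_UNACKED_PSNS */

/* Shifting the 24-bit difference into the top of a signed 32-bit value
 * lets the sign bit answer "which came first" modulo 2^24.
 */
static int psn_compare(uint32_t psn_a, uint32_t psn_b)
{
	return (int32_t)((psn_a - psn_b) << 8);
}

int main(void)
{
	uint32_t comp_psn = 0xfffff0;	/* just below the 24-bit wrap */
	uint32_t req_psn = (comp_psn + MAX_UNACKED_PSNS + 1) & 0xffffff;

	/* > 0 means the requester ran too far ahead and must wait,
	 * even though req_psn numerically wrapped back to a small value
	 */
	printf("%d\n", psn_compare(req_psn, comp_psn + MAX_UNACKED_PSNS) > 0);
	return 0;
}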
From patchwork Sat Sep 17 03:11:03 2022
X-Patchwork-Submitter: Bob Pearson
X-Patchwork-Id: 12979003
From: Bob Pearson
To: jgg@nvidia.com, zyjzyj2000@gmail.com, lizhijian@fujitsu.com,
	linux-rdma@vger.kernel.org
Cc: Bob Pearson
Subject: [PATCH for-next 12/13] RDMA/rxe: Extend rxe_net.c to support xrc qps
Date: Fri, 16 Sep 2022 22:11:03 -0500
Message-Id: <20220917031104.21222-13-rpearsonhpe@gmail.com>
In-Reply-To: <20220917031104.21222-1-rpearsonhpe@gmail.com>
References: <20220917031104.21222-1-rpearsonhpe@gmail.com>
X-Mailing-List: linux-rdma@vger.kernel.org

Extend code in rxe_net.c to support xrc qp types.
Signed-off-by: Bob Pearson
---
 drivers/infiniband/sw/rxe/rxe_net.c | 23 ++++++++++++++++-------
 1 file changed, 16 insertions(+), 7 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe_net.c b/drivers/infiniband/sw/rxe/rxe_net.c
index d46190ad082f..d9bedd6fc497 100644
--- a/drivers/infiniband/sw/rxe/rxe_net.c
+++ b/drivers/infiniband/sw/rxe/rxe_net.c
@@ -92,7 +92,7 @@ static struct dst_entry *rxe_find_route(struct net_device *ndev,
 {
 	struct dst_entry *dst = NULL;
 
-	if (qp_type(qp) == IB_QPT_RC)
+	if (qp_type(qp) == IB_QPT_RC || qp_type(qp) == IB_QPT_XRC_INI)
 		dst = sk_dst_get(qp->sk->sk);
 
 	if (!dst || !dst_check(dst, qp->dst_cookie)) {
@@ -120,7 +120,8 @@ static struct dst_entry *rxe_find_route(struct net_device *ndev,
 #endif
 	}
 
-	if (dst && (qp_type(qp) == IB_QPT_RC)) {
+	if (dst && (qp_type(qp) == IB_QPT_RC ||
+		    qp_type(qp) == IB_QPT_XRC_INI)) {
 		dst_hold(dst);
 		sk_dst_set(qp->sk->sk, dst);
 	}
@@ -386,14 +387,23 @@ static int rxe_send(struct sk_buff *skb, struct rxe_pkt_info *pkt)
  */
 static int rxe_loopback(struct sk_buff *skb, struct rxe_pkt_info *pkt)
 {
-	memcpy(SKB_TO_PKT(skb), pkt, sizeof(*pkt));
+	struct rxe_pkt_info *new_pkt = SKB_TO_PKT(skb);
+
+	memset(new_pkt, 0, sizeof(*new_pkt));
+
+	/* match rxe_udp_encap_recv */
+	new_pkt->rxe = pkt->rxe;
+	new_pkt->port_num = 1;
+	new_pkt->hdr = pkt->hdr;
+	new_pkt->mask = RXE_GRH_MASK;
+	new_pkt->paylen = pkt->paylen;
 
 	if (skb->protocol == htons(ETH_P_IP))
 		skb_pull(skb, sizeof(struct iphdr));
 	else
 		skb_pull(skb, sizeof(struct ipv6hdr));
 
-	if (WARN_ON(!ib_device_try_get(&pkt->rxe->ib_dev))) {
+	if (WARN_ON(!ib_device_try_get(&new_pkt->rxe->ib_dev))) {
 		kfree_skb(skb);
 		return -EIO;
 	}
@@ -412,7 +422,6 @@ int rxe_xmit_packet(struct rxe_qp *qp, struct rxe_pkt_info *pkt,
 
 	if ((is_request && (qp->req.state != QP_STATE_READY)) ||
 	    (!is_request && (qp->resp.state != QP_STATE_READY))) {
-		pr_info("Packet dropped. QP is not in ready state\n");
 		goto drop;
 	}
 
@@ -427,8 +436,8 @@ int rxe_xmit_packet(struct rxe_qp *qp, struct rxe_pkt_info *pkt,
 		return err;
 	}
 
-	if ((qp_type(qp) != IB_QPT_RC) &&
-	    (pkt->mask & RXE_LAST_MASK)) {
+	if ((pkt->mask & RXE_REQ_MASK) && (pkt->mask & RXE_LAST_MASK) &&
+	    (qp_type(qp) != IB_QPT_RC && qp_type(qp) != IB_QPT_XRC_INI)) {
 		pkt->wqe->state = wqe_state_done;
 		rxe_run_task(&qp->comp.task, 1);
 	}
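The rxe_xmit_packet() change tightens the transmit-time completion
shortcut in two ways: the packet must now be a request (acks flowing out
of an XRC_TGT responder have no wqe to retire -- pkt->wqe and pkt->srq
share a union after patch 09), and RC/XRC_INI are excluded because their
wqes complete only when the ACK arrives. The rule as a runnable sketch
(masks and type names are stand-ins; the real ones live in rxe_opcode.h
and rdma/ib_verbs.h):

#include <stdint.h>
#include <stdio.h>

#define REQ_MASK  (1u << 0)
#define LAST_MASK (1u << 1)

enum { QPT_RC, QPT_UC, QPT_UD, QPT_XRC_INI, QPT_XRC_TGT };

/* only the last packet of a request completes at transmit time, and
 * only on transports that never see an ACK
 */
static int completes_at_xmit(uint32_t mask, int qp_type)
{
	int acked = (qp_type == QPT_RC || qp_type == QPT_XRC_INI);

	return (mask & REQ_MASK) && (mask & LAST_MASK) && !acked;
}

int main(void)
{
	printf("UD last req:      %d\n",
	       completes_at_xmit(REQ_MASK | LAST_MASK, QPT_UD));      /* 1 */
	printf("XRC_INI last req: %d\n",
	       completes_at_xmit(REQ_MASK | LAST_MASK, QPT_XRC_INI)); /* 0 */
	printf("response only:    %d\n",
	       completes_at_xmit(LAST_MASK, QPT_XRC_TGT));            /* 0 */
	return 0;
}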
From patchwork Sat Sep 17 03:11:04 2022
X-Patchwork-Submitter: Bob Pearson
X-Patchwork-Id: 12979000
From: Bob Pearson
To: jgg@nvidia.com, zyjzyj2000@gmail.com, lizhijian@fujitsu.com,
	linux-rdma@vger.kernel.org
Cc: Bob Pearson
Subject: [PATCH for-next 13/13] RDMA/rxe: Extend rxe_resp.c to support xrc qps
Date: Fri, 16 Sep 2022 22:11:04 -0500
Message-Id: <20220917031104.21222-14-rpearsonhpe@gmail.com>
In-Reply-To: <20220917031104.21222-1-rpearsonhpe@gmail.com>
References: <20220917031104.21222-1-rpearsonhpe@gmail.com>
X-Mailing-List: linux-rdma@vger.kernel.org

Extend code in rxe_resp.c to support xrc qp types.

Signed-off-by: Bob Pearson
---
 drivers/infiniband/sw/rxe/rxe_loc.h  |   3 +-
 drivers/infiniband/sw/rxe/rxe_mw.c   |  14 +--
 drivers/infiniband/sw/rxe/rxe_resp.c | 161 +++++++++++++++++++++------
 3 files changed, 138 insertions(+), 40 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe_loc.h b/drivers/infiniband/sw/rxe/rxe_loc.h
index c6fb93a749ad..9381c76bff87 100644
--- a/drivers/infiniband/sw/rxe/rxe_loc.h
+++ b/drivers/infiniband/sw/rxe/rxe_loc.h
@@ -87,7 +87,8 @@ int rxe_alloc_mw(struct ib_mw *ibmw, struct ib_udata *udata);
 int rxe_dealloc_mw(struct ib_mw *ibmw);
 int rxe_bind_mw(struct rxe_qp *qp, struct rxe_send_wqe *wqe);
 int rxe_invalidate_mw(struct rxe_qp *qp, u32 rkey);
-struct rxe_mw *rxe_lookup_mw(struct rxe_qp *qp, int access, u32 rkey);
+struct rxe_mw *rxe_lookup_mw(struct rxe_pd *pd, struct rxe_qp *qp,
+			     int access, u32 rkey);
 void rxe_mw_cleanup(struct rxe_pool_elem *elem);
 
 /* rxe_net.c */
diff --git a/drivers/infiniband/sw/rxe/rxe_mw.c b/drivers/infiniband/sw/rxe/rxe_mw.c
index 104993801a80..2a7493526ec2 100644
--- a/drivers/infiniband/sw/rxe/rxe_mw.c
+++ b/drivers/infiniband/sw/rxe/rxe_mw.c
@@ -280,10 +280,10 @@ int rxe_invalidate_mw(struct rxe_qp *qp, u32 rkey)
 	return ret;
 }
 
-struct rxe_mw *rxe_lookup_mw(struct rxe_qp *qp, int access, u32 rkey)
+struct rxe_mw *rxe_lookup_mw(struct rxe_pd *pd, struct rxe_qp *qp,
+			     int access, u32 rkey)
 {
 	struct rxe_dev *rxe = to_rdev(qp->ibqp.device);
-	struct rxe_pd *pd = to_rpd(qp->ibqp.pd);
 	struct rxe_mw *mw;
 	int index = rkey >> 8;
 
@@ -291,11 +291,11 @@ struct rxe_mw *rxe_lookup_mw(struct rxe_qp *qp, int access, u32 rkey)
 	if (!mw)
 		return NULL;
 
-	if (unlikely((mw->rkey != rkey) || rxe_mw_pd(mw) != pd ||
-		     (mw->ibmw.type == IB_MW_TYPE_2 && mw->qp != qp) ||
-		     (mw->length == 0) ||
-		     (access && !(access & mw->access)) ||
-		     mw->state != RXE_MW_STATE_VALID)) {
+	if ((mw->rkey != rkey) || rxe_mw_pd(mw) != pd ||
+	    (mw->ibmw.type == IB_MW_TYPE_2 &&
+	     (mw->qp != qp || qp_type(qp) == IB_QPT_XRC_TGT)) ||
+	    (mw->length == 0) || (access && !(access & mw->access)) ||
+	    mw->state != RXE_MW_STATE_VALID) {
 		rxe_put(mw);
 		return NULL;
 	}
diff --git a/drivers/infiniband/sw/rxe/rxe_resp.c b/drivers/infiniband/sw/rxe/rxe_resp.c
index cb560cbe418d..b0a97074bc5a 100644
--- a/drivers/infiniband/sw/rxe/rxe_resp.c
+++ b/drivers/infiniband/sw/rxe/rxe_resp.c
@@ -88,7 +88,8 @@ void rxe_resp_queue_pkt(struct rxe_qp *qp, struct sk_buff *skb)
 
 	skb_queue_tail(&qp->req_pkts, skb);
 
-	must_sched = (pkt->opcode == IB_OPCODE_RC_RDMA_READ_REQUEST) ||
+	/* mask off opcode type bits */
+	must_sched = ((pkt->opcode & 0x1f) == IB_OPCODE_RDMA_READ_REQUEST) ||
 		     (skb_queue_len(&qp->req_pkts) > 1);
 
 	rxe_run_task(&qp->resp.task, must_sched);
@@ -127,6 +128,7 @@ static enum resp_states check_psn(struct rxe_qp *qp,
 
 	switch (qp_type(qp)) {
 	case IB_QPT_RC:
+	case IB_QPT_XRC_TGT:
 		if (diff > 0) {
 			if (qp->resp.sent_psn_nak)
 				return RESPST_CLEANUP;
@@ -156,6 +158,7 @@ static enum resp_states check_psn(struct rxe_qp *qp,
 			return RESPST_CLEANUP;
 		}
 		break;
+
 	default:
 		break;
 	}
@@ -248,6 +251,47 @@ static enum resp_states check_op_seq(struct rxe_qp *qp,
 		}
 		break;
 
+	case IB_QPT_XRC_TGT:
+		switch (qp->resp.opcode) {
+		case IB_OPCODE_XRC_SEND_FIRST:
+		case IB_OPCODE_XRC_SEND_MIDDLE:
+			switch (pkt->opcode) {
+			case IB_OPCODE_XRC_SEND_MIDDLE:
+			case IB_OPCODE_XRC_SEND_LAST:
+			case IB_OPCODE_XRC_SEND_LAST_WITH_IMMEDIATE:
+			case IB_OPCODE_XRC_SEND_LAST_WITH_INVALIDATE:
+				return RESPST_CHK_OP_VALID;
+			default:
+				return RESPST_ERR_MISSING_OPCODE_LAST_C;
+			}
+
+		case IB_OPCODE_XRC_RDMA_WRITE_FIRST:
+		case IB_OPCODE_XRC_RDMA_WRITE_MIDDLE:
+			switch (pkt->opcode) {
+			case IB_OPCODE_XRC_RDMA_WRITE_MIDDLE:
+			case IB_OPCODE_XRC_RDMA_WRITE_LAST:
+			case IB_OPCODE_XRC_RDMA_WRITE_LAST_WITH_IMMEDIATE:
+				return RESPST_CHK_OP_VALID;
+			default:
+				return RESPST_ERR_MISSING_OPCODE_LAST_C;
+			}
+
+		default:
+			switch (pkt->opcode) {
+			case IB_OPCODE_XRC_SEND_MIDDLE:
+			case IB_OPCODE_XRC_SEND_LAST:
+			case IB_OPCODE_XRC_SEND_LAST_WITH_IMMEDIATE:
+			case IB_OPCODE_XRC_SEND_LAST_WITH_INVALIDATE:
+			case IB_OPCODE_XRC_RDMA_WRITE_MIDDLE:
+			case IB_OPCODE_XRC_RDMA_WRITE_LAST:
+			case IB_OPCODE_XRC_RDMA_WRITE_LAST_WITH_IMMEDIATE:
+				return RESPST_ERR_MISSING_OPCODE_FIRST;
+			default:
+				return RESPST_CHK_OP_VALID;
+			}
+		}
+		break;
+
 	default:
 		return RESPST_CHK_OP_VALID;
 	}
@@ -258,6 +302,7 @@ static enum resp_states check_op_valid(struct rxe_qp *qp,
 {
 	switch (qp_type(qp)) {
 	case IB_QPT_RC:
+	case IB_QPT_XRC_TGT:
 		if (((pkt->mask & RXE_READ_MASK) &&
 		     !(qp->attr.qp_access_flags & IB_ACCESS_REMOTE_READ)) ||
 		    ((pkt->mask & RXE_WRITE_MASK) &&
@@ -290,9 +335,22 @@ static enum resp_states check_op_valid(struct rxe_qp *qp,
 	return RESPST_CHK_RESOURCE;
 }
 
-static enum resp_states get_srq_wqe(struct rxe_qp *qp)
+static struct rxe_srq *get_srq(struct rxe_qp *qp, struct rxe_pkt_info *pkt)
+{
+	struct rxe_srq *srq;
+
+	if (qp_type(qp) == IB_QPT_XRC_TGT)
+		srq = pkt->srq;
+	else if (qp->srq)
+		srq = qp->srq;
+	else
+		srq = NULL;
+
+	return srq;
+}
+
+static enum resp_states get_srq_wqe(struct rxe_qp *qp, struct rxe_srq *srq)
 {
-	struct rxe_srq *srq = qp->srq;
 	struct rxe_queue *q = srq->rq.queue;
 	struct rxe_recv_wqe *wqe;
 	struct ib_event ev;
@@ -344,7 +402,7 @@ static enum resp_states get_srq_wqe(struct rxe_qp *qp)
 static enum resp_states check_resource(struct rxe_qp *qp,
 				       struct rxe_pkt_info *pkt)
 {
-	struct rxe_srq *srq = qp->srq;
+	struct rxe_srq *srq = get_srq(qp, pkt);
 
 	if (qp->resp.state == QP_STATE_ERROR) {
 		if (qp->resp.wqe) {
@@ -377,7 +435,7 @@ static enum resp_states check_resource(struct rxe_qp *qp,
 
 	if (pkt->mask & RXE_RWR_MASK) {
 		if (srq)
-			return get_srq_wqe(qp);
+			return get_srq_wqe(qp, srq);
 
 		qp->resp.wqe = queue_head(qp->rq.queue,
 				QUEUE_TYPE_FROM_CLIENT);
@@ -387,6 +445,7 @@ static enum resp_states check_resource(struct rxe_qp *qp,
 	return RESPST_CHK_LENGTH;
 }
 
+/* TODO this should actually do what it says per IBA spec */
 static enum resp_states check_length(struct rxe_qp *qp,
 				     struct rxe_pkt_info *pkt)
 {
@@ -397,6 +456,9 @@ static enum resp_states check_length(struct rxe_qp *qp,
 	case IB_QPT_UC:
 		return RESPST_CHK_RKEY;
 
+	case IB_QPT_XRC_TGT:
+		return RESPST_CHK_RKEY;
+
 	default:
 		return RESPST_CHK_RKEY;
 	}
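get_srq() is the pivot of the whole series: on an XRC target the receive
queue is a per-packet choice, because one XRC_TGT QP fans incoming
requests out to any SRQ in its XRC domain, selected by the SRQN in the
XRCETH and vetted against the QP's xrcd back in check_xrcd(). A toy model
of that fan-out (illustrative names and a flat table in place of rxe's
index pool; not the driver's code):

#include <stdio.h>
#include <stdint.h>

struct srq { int id; int xrcd_id; };
struct qp  { int xrcd_id; };
struct pkt { uint32_t srqn; };

static struct srq srq_table[] = { { 0, 7 }, { 1, 7 }, { 2, 9 } };

static struct srq *lookup_srq(struct qp *qp, struct pkt *pkt)
{
	struct srq *srq;

	if (pkt->srqn >= sizeof(srq_table) / sizeof(srq_table[0]))
		return NULL;
	srq = &srq_table[pkt->srqn];

	/* reject SRQs that belong to a different XRC domain */
	return (srq->xrcd_id == qp->xrcd_id) ? srq : NULL;
}

int main(void)
{
	struct qp qp = { .xrcd_id = 7 };
	struct pkt a = { .srqn = 1 }, b = { .srqn = 2 };

	printf("srqn 1 -> %s\n", lookup_srq(&qp, &a) ? "ok" : "rejected");
	printf("srqn 2 -> %s\n", lookup_srq(&qp, &b) ? "ok" : "rejected");
	return 0;
}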
@@ -407,6 +469,7 @@ static enum resp_states check_rkey(struct rxe_qp *qp,
 {
 	struct rxe_mr *mr = NULL;
 	struct rxe_mw *mw = NULL;
+	struct rxe_pd *pd;
 	u64 va;
 	u32 rkey;
 	u32 resid;
@@ -447,8 +510,11 @@ static enum resp_states check_rkey(struct rxe_qp *qp,
 	resid = qp->resp.resid;
 	pktlen = payload_size(pkt);
 
+	/* we have ref counts on qp and pkt->srq so this is just a temp */
+	pd = (qp_type(qp) == IB_QPT_XRC_TGT) ? pkt->srq->pd : qp->pd;
+
 	if (rkey_is_mw(rkey)) {
-		mw = rxe_lookup_mw(qp, access, rkey);
+		mw = rxe_lookup_mw(pd, qp, access, rkey);
 		if (!mw) {
 			pr_debug("%s: no MW matches rkey %#x\n",
 				 __func__, rkey);
@@ -469,7 +535,7 @@ static enum resp_states check_rkey(struct rxe_qp *qp,
 		rxe_put(mw);
 		rxe_get(mr);
 	} else {
-		mr = lookup_mr(qp->pd, access, rkey, RXE_LOOKUP_REMOTE);
+		mr = lookup_mr(pd, access, rkey, RXE_LOOKUP_REMOTE);
 		if (!mr) {
 			pr_debug("%s: no MR matches rkey %#x\n",
 				 __func__, rkey);
@@ -518,12 +584,12 @@ static enum resp_states check_rkey(struct rxe_qp *qp,
 	return state;
 }
 
-static enum resp_states send_data_in(struct rxe_qp *qp, void *data_addr,
-				     int data_len)
+static enum resp_states send_data_in(struct rxe_pd *pd, struct rxe_qp *qp,
+				     void *data_addr, int data_len)
 {
 	int err;
 
-	err = copy_data(qp->pd, IB_ACCESS_LOCAL_WRITE, &qp->resp.wqe->dma,
+	err = copy_data(pd, IB_ACCESS_LOCAL_WRITE, &qp->resp.wqe->dma,
 			data_addr, data_len, RXE_TO_MR_OBJ);
 	if (unlikely(err))
 		return (err == -ENOSPC) ? RESPST_ERR_LENGTH
@@ -627,7 +693,8 @@ static enum resp_states atomic_reply(struct rxe_qp *qp,
 		spin_lock_bh(&atomic_ops_lock);
 		res->atomic.orig_val = value = *vaddr;
 
-		if (pkt->opcode == IB_OPCODE_RC_COMPARE_SWAP) {
+		if (pkt->opcode == IB_OPCODE_RC_COMPARE_SWAP ||
+		    pkt->opcode == IB_OPCODE_XRC_COMPARE_SWAP) {
 			if (value == atmeth_comp(pkt))
 				value = atmeth_swap_add(pkt);
 		} else {
@@ -786,24 +853,30 @@ static enum resp_states read_reply(struct rxe_qp *qp,
 		}
 
 		if (res->read.resid <= mtu)
-			opcode = IB_OPCODE_RC_RDMA_READ_RESPONSE_ONLY;
+			opcode = IB_OPCODE_RDMA_READ_RESPONSE_ONLY;
 		else
-			opcode = IB_OPCODE_RC_RDMA_READ_RESPONSE_FIRST;
+			opcode = IB_OPCODE_RDMA_READ_RESPONSE_FIRST;
 	} else {
 		mr = rxe_recheck_mr(qp, res->read.rkey);
 		if (!mr)
 			return RESPST_ERR_RKEY_VIOLATION;
 
 		if (res->read.resid > mtu)
-			opcode = IB_OPCODE_RC_RDMA_READ_RESPONSE_MIDDLE;
+			opcode = IB_OPCODE_RDMA_READ_RESPONSE_MIDDLE;
 		else
-			opcode = IB_OPCODE_RC_RDMA_READ_RESPONSE_LAST;
+			opcode = IB_OPCODE_RDMA_READ_RESPONSE_LAST;
 	}
 
 	res->state = rdatm_res_state_next;
 
 	payload = min_t(int, res->read.resid, mtu);
 
+	/* fixup opcode type */
+	if (qp_type(qp) == IB_QPT_XRC_TGT)
+		opcode |= IB_OPCODE_XRC;
+	else
+		opcode |= IB_OPCODE_RC;
+
 	skb = prepare_ack_packet(qp, &ack_pkt, opcode, payload,
 				 res->cur_psn, AETH_ACK_UNLIMITED);
 	if (!skb)
@@ -858,6 +931,8 @@ static enum resp_states execute(struct rxe_qp *qp, struct rxe_pkt_info *pkt)
 	enum resp_states err;
 	struct sk_buff *skb = PKT_TO_SKB(pkt);
 	union rdma_network_hdr hdr;
+	struct rxe_pd *pd = (qp_type(qp) == IB_QPT_XRC_TGT) ?
+				pkt->srq->pd : qp->pd;
 
 	if (pkt->mask & RXE_SEND_MASK) {
 		if (qp_type(qp) == IB_QPT_UD ||
@@ -867,15 +942,15 @@ static enum resp_states execute(struct rxe_qp *qp, struct rxe_pkt_info *pkt)
 					sizeof(hdr.reserved));
 				memcpy(&hdr.roce4grh, ip_hdr(skb),
 					sizeof(hdr.roce4grh));
-				err = send_data_in(qp, &hdr, sizeof(hdr));
+				err = send_data_in(pd, qp, &hdr, sizeof(hdr));
 			} else {
-				err = send_data_in(qp, ipv6_hdr(skb),
+				err = send_data_in(pd, qp, ipv6_hdr(skb),
 					sizeof(hdr));
 			}
 			if (err)
 				return err;
 		}
-		err = send_data_in(qp, payload_addr(pkt), payload_size(pkt));
+		err = send_data_in(pd, qp, payload_addr(pkt), payload_size(pkt));
 		if (err)
 			return err;
 	} else if (pkt->mask & RXE_WRITE_MASK) {
@@ -914,7 +989,7 @@ static enum resp_states execute(struct rxe_qp *qp, struct rxe_pkt_info *pkt)
 
 	if (pkt->mask & RXE_COMP_MASK)
 		return RESPST_COMPLETE;
-	else if (qp_type(qp) == IB_QPT_RC)
+	else if (qp_type(qp) == IB_QPT_RC || qp_type(qp) == IB_QPT_XRC_TGT)
 		return RESPST_ACKNOWLEDGE;
 	else
 		return RESPST_CLEANUP;
@@ -928,13 +1003,21 @@ static enum resp_states do_complete(struct rxe_qp *qp,
 	struct ib_uverbs_wc *uwc = &cqe.uibwc;
 	struct rxe_recv_wqe *wqe = qp->resp.wqe;
 	struct rxe_dev *rxe = to_rdev(qp->ibqp.device);
+	struct rxe_cq *cq;
+	struct rxe_srq *srq;
 
 	if (!wqe)
 		goto finish;
 
 	memset(&cqe, 0, sizeof(cqe));
 
-	if (qp->rcq->is_user) {
+	/* srq and cq if != 0 are protected by references held by qp or pkt */
+	srq = (qp_type(qp) == IB_QPT_XRC_TGT) ? pkt->srq : qp->srq;
+	cq = (qp_type(qp) == IB_QPT_XRC_TGT) ? pkt->srq->cq : qp->rcq;
+
+	WARN_ON(!cq);
+
+	if (cq->is_user) {
 		uwc->status = qp->resp.status;
 		uwc->qp_num = qp->ibqp.qp_num;
 		uwc->wr_id = wqe->wr_id;
@@ -956,7 +1039,7 @@ static enum resp_states do_complete(struct rxe_qp *qp,
 		/* fields after byte_len are different between kernel and user
 		 * space
 		 */
-		if (qp->rcq->is_user) {
+		if (cq->is_user) {
 			uwc->wc_flags = IB_WC_GRH;
 
 			if (pkt->mask & RXE_IMMDT_MASK) {
@@ -1005,12 +1088,13 @@ static enum resp_states do_complete(struct rxe_qp *qp,
 	}
 
 	/* have copy for srq and reference for !srq */
-	if (!qp->srq)
+	if (!srq)
 		queue_advance_consumer(qp->rq.queue, QUEUE_TYPE_FROM_CLIENT);
 
 	qp->resp.wqe = NULL;
 
-	if (rxe_cq_post(qp->rcq, &cqe, pkt ? bth_se(pkt) : 1))
+	/* either qp or srq is holding a reference to cq */
+	if (rxe_cq_post(cq, &cqe, pkt ? bth_se(pkt) : 1))
 		return RESPST_ERR_CQ_OVERFLOW;
 
 finish:
@@ -1018,7 +1102,7 @@ static enum resp_states do_complete(struct rxe_qp *qp,
 		return RESPST_CHK_RESOURCE;
 	if (unlikely(!pkt))
 		return RESPST_DONE;
-	if (qp_type(qp) == IB_QPT_RC)
+	if (qp_type(qp) == IB_QPT_RC || qp_type(qp) == IB_QPT_XRC_TGT)
 		return RESPST_ACKNOWLEDGE;
 	else
 		return RESPST_CLEANUP;
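check_rkey(), execute() and do_complete() all rest on the same
observation: an XRC target QP owns no PD, receive queue, or rcq of its
own, so the PD and CQ are borrowed from whichever SRQ the packet selected,
and the references held on the qp and on pkt->srq keep both alive for the
duration. A compact sketch of the selection (illustrative types only, not
the driver's rxe_* structs):

#include <stdio.h>

struct pd  { int id; };
struct cq  { int id; };
struct srq { struct pd *pd; struct cq *cq; };
struct pkt { struct srq *srq; };

enum { QPT_RC, QPT_XRC_TGT };

struct qp { int type; struct pd *pd; struct cq *rcq; };

/* the responder's completion resources come from the SRQ on XRC_TGT */
static struct pd *resp_pd(struct qp *qp, struct pkt *pkt)
{
	return (qp->type == QPT_XRC_TGT) ? pkt->srq->pd : qp->pd;
}

static struct cq *resp_cq(struct qp *qp, struct pkt *pkt)
{
	return (qp->type == QPT_XRC_TGT) ? pkt->srq->cq : qp->rcq;
}

int main(void)
{
	struct pd srq_pd = { 1 };
	struct cq srq_cq = { 2 };
	struct srq srq = { &srq_pd, &srq_cq };
	struct pkt pkt = { &srq };
	struct qp tgt = { .type = QPT_XRC_TGT };	/* pd/rcq left NULL */

	printf("pd=%d cq=%d\n", resp_pd(&tgt, &pkt)->id,
	       resp_cq(&tgt, &pkt)->id);
	return 0;
}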
@@ -1029,9 +1113,13 @@ static int send_ack(struct rxe_qp *qp, u8 syndrome, u32 psn)
 	int err = 0;
 	struct rxe_pkt_info ack_pkt;
 	struct sk_buff *skb;
+	int opcode;
+
+	opcode = (qp_type(qp) == IB_QPT_XRC_TGT) ?
+			IB_OPCODE_XRC_ACKNOWLEDGE :
+			IB_OPCODE_RC_ACKNOWLEDGE;
 
-	skb = prepare_ack_packet(qp, &ack_pkt, IB_OPCODE_RC_ACKNOWLEDGE,
-				 0, psn, syndrome);
+	skb = prepare_ack_packet(qp, &ack_pkt, opcode, 0, psn, syndrome);
 	if (!skb) {
 		err = -ENOMEM;
 		goto err1;
@@ -1050,9 +1138,13 @@ static int send_atomic_ack(struct rxe_qp *qp, u8 syndrome, u32 psn)
 	int err = 0;
 	struct rxe_pkt_info ack_pkt;
 	struct sk_buff *skb;
+	int opcode;
+
+	opcode = (qp_type(qp) == IB_QPT_XRC_TGT) ?
+			IB_OPCODE_XRC_ATOMIC_ACKNOWLEDGE :
+			IB_OPCODE_RC_ATOMIC_ACKNOWLEDGE;
 
-	skb = prepare_ack_packet(qp, &ack_pkt, IB_OPCODE_RC_ATOMIC_ACKNOWLEDGE,
-				 0, psn, syndrome);
+	skb = prepare_ack_packet(qp, &ack_pkt, opcode, 0, psn, syndrome);
 	if (!skb) {
 		err = -ENOMEM;
 		goto out;
@@ -1073,7 +1165,7 @@ static int send_atomic_ack(struct rxe_qp *qp, u8 syndrome, u32 psn)
 static enum resp_states acknowledge(struct rxe_qp *qp,
 				    struct rxe_pkt_info *pkt)
 {
-	if (qp_type(qp) != IB_QPT_RC)
+	if (qp_type(qp) != IB_QPT_RC && qp_type(qp) != IB_QPT_XRC_TGT)
 		return RESPST_CLEANUP;
 
 	if (qp->resp.aeth_syndrome != AETH_ACK_UNLIMITED)
@@ -1094,6 +1186,8 @@ static enum resp_states cleanup(struct rxe_qp *qp,
 	if (pkt) {
 		skb = skb_dequeue(&qp->req_pkts);
 		rxe_put(qp);
+		if (pkt->srq)
+			rxe_put(pkt->srq);
 		kfree_skb(skb);
 		ib_device_put(qp->ibqp.device);
 	}
@@ -1359,7 +1453,8 @@ int rxe_responder(void *arg)
 			state = do_class_d1e_error(qp);
 			break;
 		case RESPST_ERR_RNR:
-			if (qp_type(qp) == IB_QPT_RC) {
+			if (qp_type(qp) == IB_QPT_RC ||
+			    qp_type(qp) == IB_QPT_XRC_TGT) {
 				rxe_counter_inc(rxe, RXE_CNT_SND_RNR);
 				/* RC - class B */
 				send_ack(qp, AETH_RNR_NAK |
@@ -1374,7 +1469,8 @@ int rxe_responder(void *arg)
 			break;
 
 		case RESPST_ERR_RKEY_VIOLATION:
-			if (qp_type(qp) == IB_QPT_RC) {
+			if (qp_type(qp) == IB_QPT_RC ||
+			    qp_type(qp) == IB_QPT_XRC_TGT) {
 				/* Class C */
 				do_class_ac_error(qp, AETH_NAK_REM_ACC_ERR,
 						  IB_WC_REM_ACCESS_ERR);
@@ -1400,7 +1496,8 @@ int rxe_responder(void *arg)
 			break;
 
 		case RESPST_ERR_LENGTH:
-			if (qp_type(qp) == IB_QPT_RC) {
+			if (qp_type(qp) == IB_QPT_RC ||
+			    qp_type(qp) == IB_QPT_XRC_TGT) {
 				/* Class C */
 				do_class_ac_error(qp, AETH_NAK_INVALID_REQ,
						  IB_WC_REM_INV_REQ_ERR);