From patchwork Thu Jul 27 20:01:20 2023
X-Patchwork-Submitter: Bob Pearson
X-Patchwork-Id: 13330658
X-Patchwork-Delegate: jgg@ziepe.ca
From: Bob Pearson
To: jgg@nvidia.com, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org, jhack@hpe.com
Cc: Bob Pearson
Subject: [PATCH for-next v3 01/10] RDMA/rxe: Add sg fragment ops
Date: Thu, 27 Jul 2023 15:01:20 -0500
Message-Id: <20230727200128.65947-2-rpearsonhpe@gmail.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20230727200128.65947-1-rpearsonhpe@gmail.com>
References: <20230727200128.65947-1-rpearsonhpe@gmail.com>
X-Mailing-List: linux-rdma@vger.kernel.org

Rename the enum rxe_mr_copy_dir to rxe_mr_copy_op and its values
RXE_TO_MR_OBJ and RXE_FROM_MR_OBJ to RXE_COPY_TO_MR and
RXE_COPY_FROM_MR. This allows new fragment operations to be added
later, in preparation for supporting fragmented skbs.

Signed-off-by: Bob Pearson
---
 drivers/infiniband/sw/rxe/rxe_comp.c  |  4 ++--
 drivers/infiniband/sw/rxe/rxe_loc.h   |  4 ++--
 drivers/infiniband/sw/rxe/rxe_mr.c    | 22 +++++++++++-----------
 drivers/infiniband/sw/rxe/rxe_req.c   |  2 +-
 drivers/infiniband/sw/rxe/rxe_resp.c  |  6 +++---
 drivers/infiniband/sw/rxe/rxe_verbs.h |  6 +++---
 6 files changed, 22 insertions(+), 22 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe_comp.c b/drivers/infiniband/sw/rxe/rxe_comp.c
index 5111735aafae..e3f8dfc9b8bf 100644
--- a/drivers/infiniband/sw/rxe/rxe_comp.c
+++ b/drivers/infiniband/sw/rxe/rxe_comp.c
@@ -368,7 +368,7 @@ static inline enum comp_state do_read(struct rxe_qp *qp,
 
        ret = copy_data(qp->pd, IB_ACCESS_LOCAL_WRITE,
                        &wqe->dma, payload_addr(pkt),
-                       payload_size(pkt), RXE_TO_MR_OBJ);
+                       payload_size(pkt), RXE_COPY_TO_MR);
        if (ret) {
                wqe->status = IB_WC_LOC_PROT_ERR;
                return COMPST_ERROR;
@@ -390,7 +390,7 @@ static inline enum comp_state do_atomic(struct rxe_qp *qp,
 
        ret = copy_data(qp->pd, IB_ACCESS_LOCAL_WRITE,
                        &wqe->dma, &atomic_orig,
-                       sizeof(u64), RXE_TO_MR_OBJ);
+                       sizeof(u64), RXE_COPY_TO_MR);
        if (ret) {
                wqe->status = IB_WC_LOC_PROT_ERR;
                return COMPST_ERROR;
diff --git a/drivers/infiniband/sw/rxe/rxe_loc.h b/drivers/infiniband/sw/rxe/rxe_loc.h
index cf38f4dcff78..532026cdd49e 100644
--- a/drivers/infiniband/sw/rxe/rxe_loc.h
+++ b/drivers/infiniband/sw/rxe/rxe_loc.h
@@ -64,9 +64,9 @@ int rxe_mr_init_user(struct rxe_dev *rxe, u64 start, u64 length, u64 iova,
 int rxe_mr_init_fast(int max_pages, struct rxe_mr *mr);
 int rxe_flush_pmem_iova(struct rxe_mr *mr, u64 iova, unsigned int length);
 int rxe_mr_copy(struct rxe_mr *mr, u64 iova, void *addr,
-               unsigned int length, enum rxe_mr_copy_dir dir);
+               unsigned int length, enum rxe_mr_copy_op op);
 int copy_data(struct rxe_pd *pd, int access, struct rxe_dma_info *dma,
-             void *addr, int length, enum rxe_mr_copy_dir dir);
+             void *addr, int length, enum rxe_mr_copy_op op);
 int rxe_map_mr_sg(struct ib_mr *ibmr, struct scatterlist *sg,
                  int sg_nents, unsigned int *sg_offset);
 int rxe_mr_do_atomic_op(struct rxe_mr *mr, u64 iova, int opcode,
diff --git a/drivers/infiniband/sw/rxe/rxe_mr.c b/drivers/infiniband/sw/rxe/rxe_mr.c
index f54042e9aeb2..812c85cad463 100644
--- a/drivers/infiniband/sw/rxe/rxe_mr.c
+++ b/drivers/infiniband/sw/rxe/rxe_mr.c
@@ -243,7 +243,7 @@ int rxe_map_mr_sg(struct ib_mr *ibmr, struct scatterlist *sgl,
 }
 
 static int rxe_mr_copy_xarray(struct rxe_mr *mr, u64 iova, void *addr,
-                             unsigned int length, enum rxe_mr_copy_dir dir)
+                             unsigned int length, enum rxe_mr_copy_op op)
 {
        unsigned int page_offset = rxe_mr_iova_to_page_offset(mr, iova);
        unsigned long index = rxe_mr_iova_to_index(mr, iova);
@@ -259,7 +259,7 @@ static int rxe_mr_copy_xarray(struct rxe_mr *mr, u64 iova, void *addr,
                bytes = min_t(unsigned int, length,
                              mr_page_size(mr) - page_offset);
                va = kmap_local_page(page);
-               if (dir == RXE_FROM_MR_OBJ)
+               if (op == RXE_COPY_FROM_MR)
                        memcpy(addr, va + page_offset, bytes);
                else
                        memcpy(va + page_offset, addr, bytes);
@@ -275,7 +275,7 @@ static int rxe_mr_copy_xarray(struct rxe_mr *mr, u64 iova, void *addr,
 }
 
 static void rxe_mr_copy_dma(struct rxe_mr *mr, u64 dma_addr, void *addr,
-                           unsigned int length, enum rxe_mr_copy_dir dir)
+                           unsigned int length, enum rxe_mr_copy_op op)
 {
        unsigned int page_offset = dma_addr & (PAGE_SIZE - 1);
        unsigned int bytes;
@@ -288,10 +288,10 @@ static void rxe_mr_copy_dma(struct rxe_mr *mr, u64 dma_addr, void *addr,
                              PAGE_SIZE - page_offset);
                va = kmap_local_page(page);
-               if (dir == RXE_TO_MR_OBJ)
-                       memcpy(va + page_offset, addr, bytes);
-               else
+               if (op == RXE_COPY_FROM_MR)
                        memcpy(addr, va + page_offset, bytes);
+               else
+                       memcpy(va + page_offset, addr, bytes);
                kunmap_local(va);
 
                page_offset = 0;
@@ -302,7 +302,7 @@ static void rxe_mr_copy_dma(struct rxe_mr *mr, u64 dma_addr, void *addr,
 }
 
 int rxe_mr_copy(struct rxe_mr *mr, u64 iova, void *addr,
-               unsigned int length, enum rxe_mr_copy_dir dir)
+               unsigned int length, enum rxe_mr_copy_op op)
 {
        int err;
 
@@ -313,7 +313,7 @@ int rxe_mr_copy(struct rxe_mr *mr, u64 iova, void *addr,
                return -EINVAL;
 
        if (mr->ibmr.type == IB_MR_TYPE_DMA) {
-               rxe_mr_copy_dma(mr, iova, addr, length, dir);
+               rxe_mr_copy_dma(mr, iova, addr, length, op);
                return 0;
        }
 
@@ -323,7 +323,7 @@ int rxe_mr_copy(struct rxe_mr *mr, u64 iova, void *addr,
                return err;
        }
 
-       return rxe_mr_copy_xarray(mr, iova, addr, length, dir);
+       return rxe_mr_copy_xarray(mr, iova, addr, length, op);
 }
 
 /* copy data in or out of a wqe, i.e. sg list
@@ -335,7 +335,7 @@ int copy_data(
        struct rxe_dma_info *dma,
        void *addr,
        int length,
-       enum rxe_mr_copy_dir dir)
+       enum rxe_mr_copy_op op)
 {
        int bytes;
        struct rxe_sge *sge = &dma->sge[dma->cur_sge];
@@ -395,7 +395,7 @@ int copy_data(
                if (bytes > 0) {
                        iova = sge->addr + offset;
 
-                       err = rxe_mr_copy(mr, iova, addr, bytes, dir);
+                       err = rxe_mr_copy(mr, iova, addr, bytes, op);
                        if (err)
                                goto err2;
diff --git a/drivers/infiniband/sw/rxe/rxe_req.c b/drivers/infiniband/sw/rxe/rxe_req.c
index 51b781ac2844..f3653234cf32 100644
--- a/drivers/infiniband/sw/rxe/rxe_req.c
+++ b/drivers/infiniband/sw/rxe/rxe_req.c
@@ -327,7 +327,7 @@ static int rxe_init_payload(struct rxe_qp *qp, struct rxe_send_wqe *wqe,
                wqe->dma.sge_offset += payload;
        } else {
                err = copy_data(qp->pd, 0, &wqe->dma, payload_addr(pkt),
-                               payload, RXE_FROM_MR_OBJ);
+                               payload, RXE_COPY_FROM_MR);
        }
 
        return err;
diff --git a/drivers/infiniband/sw/rxe/rxe_resp.c b/drivers/infiniband/sw/rxe/rxe_resp.c
index 8a25c56dfd86..596615c515ad 100644
--- a/drivers/infiniband/sw/rxe/rxe_resp.c
+++ b/drivers/infiniband/sw/rxe/rxe_resp.c
@@ -565,7 +565,7 @@ static enum resp_states send_data_in(struct rxe_qp *qp, void *data_addr,
        int err;
 
        err = copy_data(qp->pd, IB_ACCESS_LOCAL_WRITE, &qp->resp.wqe->dma,
-                       data_addr, data_len, RXE_TO_MR_OBJ);
+                       data_addr, data_len, RXE_COPY_TO_MR);
        if (unlikely(err))
                return (err == -ENOSPC) ? RESPST_ERR_LENGTH
                                        : RESPST_ERR_MALFORMED_WQE;
@@ -581,7 +581,7 @@ static enum resp_states write_data_in(struct rxe_qp *qp,
        int data_len = payload_size(pkt);
 
        err = rxe_mr_copy(qp->resp.mr, qp->resp.va + qp->resp.offset,
-                         payload_addr(pkt), data_len, RXE_TO_MR_OBJ);
+                         payload_addr(pkt), data_len, RXE_COPY_TO_MR);
        if (err) {
                rc = RESPST_ERR_RKEY_VIOLATION;
                goto out;
@@ -928,7 +928,7 @@ static enum resp_states read_reply(struct rxe_qp *qp,
        }
 
        err = rxe_mr_copy(mr, res->read.va, payload_addr(&ack_pkt),
-                         payload, RXE_FROM_MR_OBJ);
+                         payload, RXE_COPY_FROM_MR);
        if (err) {
                kfree_skb(skb);
                state = RESPST_ERR_RKEY_VIOLATION;
diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.h b/drivers/infiniband/sw/rxe/rxe_verbs.h
index ccb9d19ffe8a..d9c44bd30da4 100644
--- a/drivers/infiniband/sw/rxe/rxe_verbs.h
+++ b/drivers/infiniband/sw/rxe/rxe_verbs.h
@@ -275,9 +275,9 @@ enum rxe_mr_state {
        RXE_MR_STATE_VALID,
 };
 
-enum rxe_mr_copy_dir {
-       RXE_TO_MR_OBJ,
-       RXE_FROM_MR_OBJ,
+enum rxe_mr_copy_op {
+       RXE_COPY_TO_MR,
+       RXE_COPY_FROM_MR,
 };
 
 enum rxe_mr_lookup_type {
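The rename above is mechanical, but it is what lets later patches in this
series add RXE_FRAG_TO_MR/RXE_FRAG_FROM_MR arms to the same dispatch. A
minimal userspace model of the op-based copy (illustrative names and types,
not the kernel code itself):

#include <stdio.h>
#include <string.h>

enum rxe_mr_copy_op {
        RXE_COPY_TO_MR,
        RXE_COPY_FROM_MR,
        /* later patches add RXE_FRAG_TO_MR / RXE_FRAG_FROM_MR here */
};

/* copy between a stand-in "MR page" and a stand-in packet buffer */
static void mr_copy(char *mr_page, char *pkt, size_t len,
                    enum rxe_mr_copy_op op)
{
        switch (op) {
        case RXE_COPY_TO_MR:            /* payload -> memory region */
                memcpy(mr_page, pkt, len);
                break;
        case RXE_COPY_FROM_MR:          /* memory region -> payload */
                memcpy(pkt, mr_page, len);
                break;
        }
}

int main(void)
{
        char mr[16] = "", pkt[16] = "payload";

        mr_copy(mr, pkt, sizeof(pkt), RXE_COPY_TO_MR);
        printf("%s\n", mr);             /* prints "payload" */
        return 0;
}

A switch on an operation enum, unlike an if on a direction flag, grows
naturally when the set of operations grows from two to four.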
From patchwork Thu Jul 27 20:01:21 2023
X-Patchwork-Submitter: Bob Pearson
X-Patchwork-Id: 13330659
X-Patchwork-Delegate: jgg@ziepe.ca
From: Bob Pearson
To: jgg@nvidia.com, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org, jhack@hpe.com
Cc: Bob Pearson
Subject: [PATCH for-next v3 02/10] RDMA/rxe: Extend rxe_mr_copy to support skb frags
Date: Thu, 27 Jul 2023 15:01:21 -0500
Message-Id: <20230727200128.65947-3-rpearsonhpe@gmail.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20230727200128.65947-1-rpearsonhpe@gmail.com>
References: <20230727200128.65947-1-rpearsonhpe@gmail.com>
X-Mailing-List: linux-rdma@vger.kernel.org

rxe_mr_copy() currently supports copying between an mr and a contiguous
region of kernel memory. Rename rxe_mr_copy() to rxe_copy_mr_data().
Extend the operations to support copying between an mr and an skb
fragment list. Fixup calls to rxe_mr_copy() to support the new API.
Add two APIs, rxe_add_frag() and rxe_num_mr_frags(), to add a fragment
to an skb and to count the total number of fragments needed.

Signed-off-by: Bob Pearson
---
 drivers/infiniband/sw/rxe/rxe_loc.h   |  10 +-
 drivers/infiniband/sw/rxe/rxe_mr.c    | 170 +++++++++++++++++++++++---
 drivers/infiniband/sw/rxe/rxe_resp.c  |  14 ++-
 drivers/infiniband/sw/rxe/rxe_verbs.h |   2 +
 4 files changed, 173 insertions(+), 23 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe_loc.h b/drivers/infiniband/sw/rxe/rxe_loc.h
index 532026cdd49e..77661e0ccab7 100644
--- a/drivers/infiniband/sw/rxe/rxe_loc.h
+++ b/drivers/infiniband/sw/rxe/rxe_loc.h
@@ -62,11 +62,15 @@ void rxe_mr_init_dma(int access, struct rxe_mr *mr);
 int rxe_mr_init_user(struct rxe_dev *rxe, u64 start, u64 length, u64 iova,
                     int access, struct rxe_mr *mr);
 int rxe_mr_init_fast(int max_pages, struct rxe_mr *mr);
-int rxe_flush_pmem_iova(struct rxe_mr *mr, u64 iova, unsigned int length);
-int rxe_mr_copy(struct rxe_mr *mr, u64 iova, void *addr,
-               unsigned int length, enum rxe_mr_copy_op op);
+int rxe_add_frag(struct sk_buff *skb, struct rxe_mr *mr, struct page *page,
+                unsigned int length, unsigned int offset);
+int rxe_num_mr_frags(struct rxe_mr *mr, u64 iova, unsigned int length);
+int rxe_copy_mr_data(struct sk_buff *skb, struct rxe_mr *mr, u64 iova,
+                    void *addr, unsigned int skb_offset,
+                    unsigned int length, enum rxe_mr_copy_op op);
 int copy_data(struct rxe_pd *pd, int access, struct rxe_dma_info *dma,
              void *addr, int length, enum rxe_mr_copy_op op);
+int rxe_flush_pmem_iova(struct rxe_mr *mr, u64 iova, unsigned int length);
 int rxe_map_mr_sg(struct ib_mr *ibmr, struct scatterlist *sg,
                  int sg_nents, unsigned int *sg_offset);
 int rxe_mr_do_atomic_op(struct rxe_mr *mr, u64 iova, int opcode,
diff --git a/drivers/infiniband/sw/rxe/rxe_mr.c b/drivers/infiniband/sw/rxe/rxe_mr.c
index 812c85cad463..2667e8129a1d 100644
--- a/drivers/infiniband/sw/rxe/rxe_mr.c
+++ b/drivers/infiniband/sw/rxe/rxe_mr.c
@@ -242,7 +242,79 @@ int rxe_map_mr_sg(struct ib_mr *ibmr, struct scatterlist *sgl,
        return ib_sg_to_pages(ibmr, sgl, sg_nents, sg_offset, rxe_set_page);
 }
 
-static int rxe_mr_copy_xarray(struct rxe_mr *mr, u64 iova, void *addr,
+/**
+ * rxe_add_frag() - Add a frag to a nonlinear packet
+ * @skb: The packet buffer
+ * @mr: The memory region
+ * @page: The page
+ * @length: Length of fragment
+ * @offset: Offset of fragment in page
+ *
+ * Caller must verify that the fragment is contained in the page.
+ * Caller should verify that the number of fragments does not
+ * exceed MAX_SKB_FRAGS.
+ *
+ * Returns: 0 on success else a negative errno
+ */
+int rxe_add_frag(struct sk_buff *skb, struct rxe_mr *mr, struct page *page,
+                unsigned int length, unsigned int offset)
+{
+       int nr_frags = skb_shinfo(skb)->nr_frags;
+       skb_frag_t *frag = &skb_shinfo(skb)->frags[nr_frags];
+
+       if (nr_frags >= MAX_SKB_FRAGS) {
+               rxe_dbg_mr(mr, "ran out of frags");
+               return -EINVAL;
+       }
+
+       frag->bv_len = length;
+       frag->bv_offset = offset;
+       frag->bv_page = page;
+       /* because kfree_skb will call put_page() */
+       get_page(page);
+       skb_shinfo(skb)->nr_frags++;
+
+       skb->data_len += length;
+       skb->len += length;
+
+       return 0;
+}
+
+/**
+ * rxe_num_mr_frags() - Compute the number of skb frags needed to copy
+ *                      length bytes from an mr to an skb frag list.
+ * @mr: mr to copy data from
+ * @iova: iova in memory region as starting point
+ * @length: number of bytes to transfer
+ *
+ * Returns: the number of frags needed
+ */
+int rxe_num_mr_frags(struct rxe_mr *mr, u64 iova, unsigned int length)
+{
+       unsigned int page_size;
+       unsigned int page_offset;
+       unsigned int bytes;
+       int num_frags = 0;
+
+       if (mr->ibmr.type == IB_MR_TYPE_DMA)
+               page_size = PAGE_SIZE;
+       else
+               page_size = mr_page_size(mr);
+       page_offset = iova & (page_size - 1);
+
+       while (length) {
+               bytes = min_t(unsigned int, length,
+                             page_size - page_offset);
+               length -= bytes;
+               page_offset = 0;
+               num_frags++;
+       }
+
+       return num_frags;
+}
+
+static int rxe_mr_copy_xarray(struct sk_buff *skb, struct rxe_mr *mr,
+                             u64 iova, void *addr, unsigned int skb_offset,
                              unsigned int length, enum rxe_mr_copy_op op)
 {
        unsigned int page_offset = rxe_mr_iova_to_page_offset(mr, iova);
@@ -250,6 +322,7 @@ static int rxe_mr_copy_xarray(struct rxe_mr *mr, u64 iova, void *addr,
        unsigned int bytes;
        struct page *page;
        void *va;
+       int err = 0;
 
        while (length) {
                page = xa_load(&mr->page_list, index);
@@ -258,12 +331,32 @@ static int rxe_mr_copy_xarray(struct rxe_mr *mr, u64 iova, void *addr,
                bytes = min_t(unsigned int, length,
                              mr_page_size(mr) - page_offset);
-               va = kmap_local_page(page);
-               if (op == RXE_COPY_FROM_MR)
+               switch (op) {
+               case RXE_COPY_FROM_MR:
+                       va = kmap_local_page(page);
                        memcpy(addr, va + page_offset, bytes);
-               else
+                       kunmap_local(va);
+                       break;
+               case RXE_COPY_TO_MR:
+                       va = kmap_local_page(page);
                        memcpy(va + page_offset, addr, bytes);
-               kunmap_local(va);
+                       kunmap_local(va);
+                       break;
+               case RXE_FRAG_TO_MR:
+                       va = kmap_local_page(page);
+                       err = skb_copy_bits(skb, skb_offset,
+                                           va + page_offset, bytes);
+                       kunmap_local(va);
+                       skb_offset += bytes;
+                       break;
+               case RXE_FRAG_FROM_MR:
+                       err = rxe_add_frag(skb, mr, page, bytes,
+                                          page_offset);
+                       break;
+               }
+
+               if (err)
+                       return err;
 
                page_offset = 0;
                addr += bytes;
@@ -274,13 +367,15 @@ static int rxe_mr_copy_xarray(struct rxe_mr *mr, u64 iova, void *addr,
        return 0;
 }
 
-static void rxe_mr_copy_dma(struct rxe_mr *mr, u64 dma_addr, void *addr,
-                           unsigned int length, enum rxe_mr_copy_op op)
+static int rxe_mr_copy_dma(struct sk_buff *skb, struct rxe_mr *mr,
+                          u64 dma_addr, void *addr, unsigned int skb_offset,
+                          unsigned int length, enum rxe_mr_copy_op op)
 {
        unsigned int page_offset = dma_addr & (PAGE_SIZE - 1);
        unsigned int bytes;
        struct page *page;
        u8 *va;
+       int err = 0;
 
        while (length) {
                page = ib_virt_dma_to_page(dma_addr);
@@ -288,10 +383,32 @@ static void rxe_mr_copy_dma(struct rxe_mr *mr, u64 dma_addr, void *addr,
                              PAGE_SIZE - page_offset);
-               va = kmap_local_page(page);
-               if (op == RXE_COPY_FROM_MR)
+               switch (op) {
+               case RXE_COPY_FROM_MR:
+                       va = kmap_local_page(page);
                        memcpy(addr, va + page_offset, bytes);
-               else
+                       kunmap_local(va);
+                       break;
+               case RXE_COPY_TO_MR:
+                       va = kmap_local_page(page);
                        memcpy(va + page_offset, addr, bytes);
-               kunmap_local(va);
+                       kunmap_local(va);
+                       break;
+               case RXE_FRAG_TO_MR:
+                       va = kmap_local_page(page);
+                       err = skb_copy_bits(skb, skb_offset,
+                                           va + page_offset, bytes);
+                       kunmap_local(va);
+                       skb_offset += bytes;
+                       break;
+               case RXE_FRAG_FROM_MR:
+                       err = rxe_add_frag(skb, mr, page, bytes,
+                                          page_offset);
+                       break;
+               }
+
+               if (err)
+                       return err;
 
                page_offset = 0;
@@ -299,10 +416,31 @@ static void rxe_mr_copy_dma(struct rxe_mr *mr, u64 dma_addr, void *addr,
                addr += bytes;
                length -= bytes;
        }
+
+       return 0;
 }
 
-int rxe_mr_copy(struct rxe_mr *mr, u64 iova, void *addr,
-               unsigned int length, enum rxe_mr_copy_op op)
+/**
+ * rxe_copy_mr_data() - transfer data between an MR and a packet
+ * @skb: the packet buffer
+ * @mr: the MR
+ * @iova: the address in the MR
+ * @addr: the address in the packet (TO/FROM MR only)
+ * @skb_offset: offset of the data in the skb (FRAG TO MR only)
+ * @length: the length to transfer
+ * @op: copy operation (TO MR, FROM MR or FRAG MR)
+ *
+ * Copy data from a range (addr, addr+length-1) in a packet
+ * to or from a range in an MR object at (iova, iova+length-1).
+ * Or, build a frag list referencing the MR range.
+ *
+ * Caller must verify that the access permissions support the
+ * operation.
+ *
+ * Returns: 0 on success or an error
+ */
+int rxe_copy_mr_data(struct sk_buff *skb, struct rxe_mr *mr, u64 iova,
+                    void *addr, unsigned int skb_offset,
+                    unsigned int length, enum rxe_mr_copy_op op)
 {
        int err;
 
@@ -313,8 +451,8 @@ int rxe_mr_copy(struct rxe_mr *mr, u64 iova, void *addr,
                return -EINVAL;
 
        if (mr->ibmr.type == IB_MR_TYPE_DMA) {
-               rxe_mr_copy_dma(mr, iova, addr, length, op);
-               return 0;
+               return rxe_mr_copy_dma(skb, mr, iova, addr, skb_offset,
+                                      length, op);
        }
 
        err = mr_check_range(mr, iova, length);
@@ -323,7 +461,8 @@ int rxe_mr_copy(struct rxe_mr *mr, u64 iova, void *addr,
                return err;
        }
 
-       return rxe_mr_copy_xarray(mr, iova, addr, length, op);
+       return rxe_mr_copy_xarray(skb, mr, iova, addr, skb_offset,
+                                 length, op);
 }
 
 /* copy data in or out of a wqe, i.e. sg list
@@ -395,7 +534,8 @@ int copy_data(
                if (bytes > 0) {
                        iova = sge->addr + offset;
 
-                       err = rxe_mr_copy(mr, iova, addr, bytes, op);
+                       err = rxe_copy_mr_data(NULL, mr, iova, addr,
+                                              0, bytes, op);
                        if (err)
                                goto err2;
diff --git a/drivers/infiniband/sw/rxe/rxe_resp.c b/drivers/infiniband/sw/rxe/rxe_resp.c
index 596615c515ad..87d61a462ff5 100644
--- a/drivers/infiniband/sw/rxe/rxe_resp.c
+++ b/drivers/infiniband/sw/rxe/rxe_resp.c
@@ -576,12 +576,15 @@ static enum resp_states send_data_in(struct rxe_qp *qp, void *data_addr,
 static enum resp_states write_data_in(struct rxe_qp *qp,
                                      struct rxe_pkt_info *pkt)
 {
+       struct sk_buff *skb = PKT_TO_SKB(pkt);
        enum resp_states rc = RESPST_NONE;
-       int err;
        int data_len = payload_size(pkt);
+       int err;
+       int skb_offset = 0;
 
-       err = rxe_mr_copy(qp->resp.mr, qp->resp.va + qp->resp.offset,
-                         payload_addr(pkt), data_len, RXE_COPY_TO_MR);
+       err = rxe_copy_mr_data(skb, qp->resp.mr, qp->resp.va + qp->resp.offset,
+                              payload_addr(pkt), skb_offset, data_len,
+                              RXE_COPY_TO_MR);
        if (err) {
                rc = RESPST_ERR_RKEY_VIOLATION;
                goto out;
@@ -876,6 +879,7 @@ static enum resp_states read_reply(struct rxe_qp *qp,
        int err;
        struct resp_res *res = qp->resp.res;
        struct rxe_mr *mr;
+       unsigned int skb_offset = 0;
        u8 *pad_addr;
 
        if (!res) {
@@ -927,8 +931,8 @@ static enum resp_states read_reply(struct rxe_qp *qp,
                goto err_out;
        }
 
-       err = rxe_mr_copy(mr, res->read.va, payload_addr(&ack_pkt),
-                         payload, RXE_COPY_FROM_MR);
+       err = rxe_copy_mr_data(skb, mr, res->read.va, payload_addr(&ack_pkt),
+                              skb_offset, payload, RXE_COPY_FROM_MR);
        if (err) {
                kfree_skb(skb);
                state = RESPST_ERR_RKEY_VIOLATION;
diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.h b/drivers/infiniband/sw/rxe/rxe_verbs.h
index d9c44bd30da4..89cf50b938c2 100644
--- a/drivers/infiniband/sw/rxe/rxe_verbs.h
+++ b/drivers/infiniband/sw/rxe/rxe_verbs.h
@@ -278,6 +278,8 @@ enum rxe_mr_state {
 enum rxe_mr_copy_op {
        RXE_COPY_TO_MR,
        RXE_COPY_FROM_MR,
+       RXE_FRAG_TO_MR,
+       RXE_FRAG_FROM_MR,
 };
 
 enum rxe_mr_lookup_type {
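rxe_num_mr_frags() above is a pure page-walk computation: one frag per page
that the range [iova, iova+length) touches. A self-contained userspace sketch
of the same arithmetic (assuming a power-of-two page size; names are
illustrative, not the kernel function):

#include <stdio.h>

static int num_frags(unsigned long iova, unsigned int length,
                     unsigned int page_size)
{
        unsigned int page_offset = iova & (page_size - 1);
        unsigned int bytes;
        int n = 0;

        while (length) {
                /* bytes available in the current page */
                bytes = page_size - page_offset;
                if (bytes > length)
                        bytes = length;
                length -= bytes;
                page_offset = 0;        /* later pages start at offset 0 */
                n++;
        }
        return n;
}

int main(void)
{
        /* 8100 bytes starting 100 bytes into a 4K page touch 3 pages */
        printf("%d\n", num_frags(100, 8100, 4096));
        return 0;
}

The caller uses this count to verify that a transfer will fit in the
MAX_SKB_FRAGS slots of one skb before building the frag list.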
From patchwork Thu Jul 27 20:01:22 2023
X-Patchwork-Submitter: Bob Pearson
X-Patchwork-Id: 13330661
X-Patchwork-Delegate: jgg@ziepe.ca
From: Bob Pearson
To: jgg@nvidia.com, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org, jhack@hpe.com
Cc: Bob Pearson
Subject: [PATCH for-next v3 03/10] RDMA/rxe: Extend copy_data to support skb frags
Date: Thu, 27 Jul 2023 15:01:22 -0500
Message-Id: <20230727200128.65947-4-rpearsonhpe@gmail.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20230727200128.65947-1-rpearsonhpe@gmail.com>
References: <20230727200128.65947-1-rpearsonhpe@gmail.com>
X-Mailing-List: linux-rdma@vger.kernel.org

copy_data() currently supports copying between an mr and the
scatter-gather list of a wqe. Rename copy_data() to rxe_copy_dma_data().
Extend the operations to support copying between a sg list and an skb
fragment list. Fixup calls to copy_data() to use the new API. Add a
routine to count the number of skb frags required for
rxe_copy_dma_data().
Signed-off-by: Bob Pearson
---
 drivers/infiniband/sw/rxe/rxe_comp.c |  17 ++-
 drivers/infiniband/sw/rxe/rxe_loc.h  |  10 +-
 drivers/infiniband/sw/rxe/rxe_mr.c   | 175 +++++++++++++++++----------
 drivers/infiniband/sw/rxe/rxe_req.c  |  11 +-
 drivers/infiniband/sw/rxe/rxe_resp.c |   7 +-
 5 files changed, 139 insertions(+), 81 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe_comp.c b/drivers/infiniband/sw/rxe/rxe_comp.c
index e3f8dfc9b8bf..670ee08f6f5a 100644
--- a/drivers/infiniband/sw/rxe/rxe_comp.c
+++ b/drivers/infiniband/sw/rxe/rxe_comp.c
@@ -364,11 +364,14 @@ static inline enum comp_state do_read(struct rxe_qp *qp,
                                      struct rxe_pkt_info *pkt,
                                      struct rxe_send_wqe *wqe)
 {
+       struct sk_buff *skb = PKT_TO_SKB(pkt);
+       int skb_offset = 0;
        int ret;
 
-       ret = copy_data(qp->pd, IB_ACCESS_LOCAL_WRITE,
-                       &wqe->dma, payload_addr(pkt),
-                       payload_size(pkt), RXE_COPY_TO_MR);
+       ret = rxe_copy_dma_data(skb, qp->pd, IB_ACCESS_LOCAL_WRITE,
+                               &wqe->dma, payload_addr(pkt),
+                               skb_offset, payload_size(pkt),
+                               RXE_COPY_TO_MR);
        if (ret) {
                wqe->status = IB_WC_LOC_PROT_ERR;
                return COMPST_ERROR;
@@ -384,13 +387,15 @@ static inline enum comp_state do_atomic(struct rxe_qp *qp,
                                        struct rxe_pkt_info *pkt,
                                        struct rxe_send_wqe *wqe)
 {
+       struct sk_buff *skb = NULL;
+       int skb_offset = 0;
        int ret;
        u64 atomic_orig = atmack_orig(pkt);
 
-       ret = copy_data(qp->pd, IB_ACCESS_LOCAL_WRITE,
-                       &wqe->dma, &atomic_orig,
-                       sizeof(u64), RXE_COPY_TO_MR);
+       ret = rxe_copy_dma_data(skb, qp->pd, IB_ACCESS_LOCAL_WRITE,
+                               &wqe->dma, &atomic_orig,
+                               skb_offset, sizeof(u64), RXE_COPY_TO_MR);
        if (ret) {
                wqe->status = IB_WC_LOC_PROT_ERR;
                return COMPST_ERROR;
diff --git a/drivers/infiniband/sw/rxe/rxe_loc.h b/drivers/infiniband/sw/rxe/rxe_loc.h
index 77661e0ccab7..fad853199b4d 100644
--- a/drivers/infiniband/sw/rxe/rxe_loc.h
+++ b/drivers/infiniband/sw/rxe/rxe_loc.h
@@ -68,15 +68,19 @@ int rxe_num_mr_frags(struct rxe_mr *mr, u64 iova, unsigned int length);
 int rxe_copy_mr_data(struct sk_buff *skb, struct rxe_mr *mr, u64 iova,
                     void *addr, unsigned int skb_offset,
                     unsigned int length, enum rxe_mr_copy_op op);
-int copy_data(struct rxe_pd *pd, int access, struct rxe_dma_info *dma,
-             void *addr, int length, enum rxe_mr_copy_op op);
+int rxe_num_dma_frags(const struct rxe_pd *pd, const struct rxe_dma_info *dma,
+                     unsigned int length);
+int rxe_copy_dma_data(struct sk_buff *skb, struct rxe_pd *pd, int access,
+                     struct rxe_dma_info *dma, void *addr,
+                     unsigned int skb_offset, unsigned int length,
+                     enum rxe_mr_copy_op op);
 int rxe_flush_pmem_iova(struct rxe_mr *mr, u64 iova, unsigned int length);
 int rxe_map_mr_sg(struct ib_mr *ibmr, struct scatterlist *sg,
                  int sg_nents, unsigned int *sg_offset);
 int rxe_mr_do_atomic_op(struct rxe_mr *mr, u64 iova, int opcode,
                        u64 compare, u64 swap_add, u64 *orig_val);
 int rxe_mr_do_atomic_write(struct rxe_mr *mr, u64 iova, u64 value);
-struct rxe_mr *lookup_mr(struct rxe_pd *pd, int access, u32 key,
+struct rxe_mr *lookup_mr(const struct rxe_pd *pd, int access, u32 key,
                         enum rxe_mr_lookup_type type);
 int mr_check_range(struct rxe_mr *mr, u64 iova, size_t length);
 int advance_dma_data(struct rxe_dma_info *dma, unsigned int length);
diff --git a/drivers/infiniband/sw/rxe/rxe_mr.c b/drivers/infiniband/sw/rxe/rxe_mr.c
index 2667e8129a1d..0ac71238599a 100644
--- a/drivers/infiniband/sw/rxe/rxe_mr.c
+++ b/drivers/infiniband/sw/rxe/rxe_mr.c
@@ -313,6 +313,63 @@ int rxe_num_mr_frags(struct rxe_mr *mr, u64 iova, unsigned int length)
        return num_frags;
 }
 
+/**
+ * rxe_num_dma_frags() - Count the number of skb frags needed to copy
+ *                       length bytes from a dma info struct to an skb
+ * @pd: protection domain used by dma entries
+ * @dma: dma info
+ * @length: number of bytes to copy
+ *
+ * Returns: number of frags needed
+ */
+int rxe_num_dma_frags(const struct rxe_pd *pd, const struct rxe_dma_info *dma,
+                     unsigned int length)
+{
+       unsigned int cur_sge = dma->cur_sge;
+       const struct rxe_sge *sge = &dma->sge[cur_sge];
+       unsigned int offset = dma->sge_offset;
+       struct rxe_mr *mr = NULL;
+       unsigned int bytes;
+       u64 iova;
+       int num_frags = 0;
+
+       if (WARN_ON(length > dma->resid))
+               return 0;
+
+       while (length) {
+               if (offset >= sge->length) {
+                       if (mr)
+                               rxe_put(mr);
+                       sge++;
+                       cur_sge++;
+                       offset = 0;
+
+                       if (WARN_ON(cur_sge >= dma->num_sge))
+                               return 0;
+                       if (!sge->length)
+                               continue;
+               }
+
+               mr = lookup_mr(pd, 0, sge->lkey, RXE_LOOKUP_LOCAL);
+               if (WARN_ON(!mr))
+                       return 0;
+
+               bytes = min_t(unsigned int, length,
+                             sge->length - offset);
+               if (bytes) {
+                       iova = sge->addr + offset;
+                       num_frags += rxe_num_mr_frags(mr, iova, length);
+                       offset += bytes;
+                       length -= bytes;
+               }
+       }
+
+       if (mr)
+               rxe_put(mr);
+
+       return num_frags;
+}
+
 static int rxe_mr_copy_xarray(struct sk_buff *skb, struct rxe_mr *mr,
                              u64 iova, void *addr, unsigned int skb_offset,
                              unsigned int length, enum rxe_mr_copy_op op)
@@ -465,99 +522,85 @@ int rxe_copy_mr_data(struct sk_buff *skb, struct rxe_mr *mr, u64 iova,
                                  length, op);
 }
 
-/* copy data in or out of a wqe, i.e. sg list
- * under the control of a dma descriptor
+/**
+ * rxe_copy_dma_data() - transfer data between a packet and a wqe
+ * @skb: packet buffer (FRAG MR only)
+ * @pd: PD which MRs must match
+ * @access: access permission for MRs in sge (TO MR only)
+ * @dma: dma info from a wqe
+ * @addr: payload address in packet (TO/FROM MR only)
+ * @skb_offset: offset of data in skb (RXE_FRAG_TO_MR only)
+ * @length: payload length
+ * @op: copy operation (RXE_COPY_TO/FROM_MR or RXE_FRAG_TO/FROM_MR)
+ *
+ * Iterate over scatter/gather list in dma info starting from the
+ * current location until the payload length is used up and for each
+ * entry copy or build a frag list referencing the MR obtained from
+ * the lkey in the sge. This routine is called once for each packet
+ * sent or received to/from the wqe.
+ *
+ * Returns: 0 on success or an error
 */
-int copy_data(
-       struct rxe_pd *pd,
-       int access,
-       struct rxe_dma_info *dma,
-       void *addr,
-       int length,
-       enum rxe_mr_copy_op op)
+int rxe_copy_dma_data(struct sk_buff *skb, struct rxe_pd *pd, int access,
+                     struct rxe_dma_info *dma, void *addr,
+                     unsigned int skb_offset, unsigned int length,
+                     enum rxe_mr_copy_op op)
 {
-       int bytes;
-       struct rxe_sge *sge = &dma->sge[dma->cur_sge];
-       int offset = dma->sge_offset;
-       int resid = dma->resid;
-       struct rxe_mr *mr = NULL;
-       u64 iova;
-       int err;
+       struct rxe_sge *sge = &dma->sge[dma->cur_sge];
+       unsigned int offset = dma->sge_offset;
+       unsigned int resid = dma->resid;
+       struct rxe_mr *mr = NULL;
+       unsigned int bytes;
+       u64 iova;
+       int err = 0;
 
        if (length == 0)
                return 0;
 
-       if (length > resid) {
-               err = -EINVAL;
-               goto err2;
-       }
-
-       if (sge->length && (offset < sge->length)) {
-               mr = lookup_mr(pd, access, sge->lkey, RXE_LOOKUP_LOCAL);
-               if (!mr) {
-                       err = -EINVAL;
-                       goto err1;
-               }
-       }
-
-       while (length > 0) {
-               bytes = length;
+       if (length > resid)
+               return -EINVAL;
 
+       while (length) {
                if (offset >= sge->length) {
-                       if (mr) {
+                       if (mr)
                                rxe_put(mr);
-                               mr = NULL;
-                       }
+
                        sge++;
                        dma->cur_sge++;
                        offset = 0;
 
-                       if (dma->cur_sge >= dma->num_sge) {
-                               err = -ENOSPC;
-                               goto err2;
-                       }
-
-                       if (sge->length) {
-                               mr = lookup_mr(pd, access, sge->lkey,
-                                              RXE_LOOKUP_LOCAL);
-                               if (!mr) {
-                                       err = -EINVAL;
-                                       goto err1;
-                               }
-                       } else {
+                       if (dma->cur_sge >= dma->num_sge)
+                               return -EINVAL;
+                       if (!sge->length)
                                continue;
-                       }
                }
 
-               if (bytes > sge->length - offset)
-                       bytes = sge->length - offset;
+               mr = lookup_mr(pd, access, sge->lkey, RXE_LOOKUP_LOCAL);
+               if (!mr)
+                       return -EINVAL;
 
+               bytes = min_t(int, length, sge->length - offset);
                if (bytes > 0) {
                        iova = sge->addr + offset;
 
-                       err = rxe_copy_mr_data(NULL, mr, iova, addr,
-                                              0, bytes, op);
+                       err = rxe_copy_mr_data(skb, mr, iova, addr,
+                                              skb_offset, bytes, op);
                        if (err)
-                               goto err2;
+                               goto err_put;
 
-                       offset += bytes;
-                       resid -= bytes;
-                       length -= bytes;
-                       addr += bytes;
+                       addr += bytes;
+                       offset += bytes;
+                       skb_offset += bytes;
+                       resid -= bytes;
+                       length -= bytes;
                }
        }
 
        dma->sge_offset = offset;
-       dma->resid = resid;
-
-       if (mr)
-               rxe_put(mr);
-
-       return 0;
+       dma->resid = resid;
 
-err2:
+err_put:
        if (mr)
                rxe_put(mr);
-err1:
        return err;
 }
 
@@ -753,7 +796,7 @@ int advance_dma_data(struct rxe_dma_info *dma, unsigned int length)
        return 0;
 }
 
-struct rxe_mr *lookup_mr(struct rxe_pd *pd, int access, u32 key,
+struct rxe_mr *lookup_mr(const struct rxe_pd *pd, int access, u32 key,
                         enum rxe_mr_lookup_type type)
 {
        struct rxe_mr *mr;
diff --git a/drivers/infiniband/sw/rxe/rxe_req.c b/drivers/infiniband/sw/rxe/rxe_req.c
index f3653234cf32..525e704c12c2 100644
--- a/drivers/infiniband/sw/rxe/rxe_req.c
+++ b/drivers/infiniband/sw/rxe/rxe_req.c
@@ -315,8 +315,10 @@ static void rxe_init_roce_hdrs(struct rxe_qp *qp, struct rxe_send_wqe *wqe,
 }
 
 static int rxe_init_payload(struct rxe_qp *qp, struct rxe_send_wqe *wqe,
-                           struct rxe_pkt_info *pkt, u32 payload)
+                           struct rxe_pkt_info *pkt, u32 payload,
+                           struct sk_buff *skb)
 {
+       int skb_offset = 0;
        void *data;
        int err = 0;
 
@@ -326,8 +328,9 @@ static int rxe_init_payload(struct rxe_qp *qp, struct rxe_send_wqe *wqe,
                wqe->dma.resid -= payload;
                wqe->dma.sge_offset += payload;
        } else {
-               err = copy_data(qp->pd, 0, &wqe->dma, payload_addr(pkt),
-                               payload, RXE_COPY_FROM_MR);
+               err = rxe_copy_dma_data(skb, qp->pd, 0, &wqe->dma,
+                                       payload_addr(pkt), skb_offset,
+                                       payload, RXE_COPY_FROM_MR);
        }
 
        return err;
@@ -379,7 +382,7 @@ static struct sk_buff *rxe_init_req_packet(struct rxe_qp *qp,
 
        /* init payload if any */
        if (pkt->mask & RXE_WRITE_OR_SEND_MASK) {
-               err = rxe_init_payload(qp, wqe, pkt, payload);
+               err = rxe_init_payload(qp, wqe, pkt, payload, skb);
                if (unlikely(err))
                        goto err_out;
        } else if (pkt->mask & RXE_FLUSH_MASK) {
diff --git a/drivers/infiniband/sw/rxe/rxe_resp.c b/drivers/infiniband/sw/rxe/rxe_resp.c
index 87d61a462ff5..a6c1d67ad943 100644
--- a/drivers/infiniband/sw/rxe/rxe_resp.c
+++ b/drivers/infiniband/sw/rxe/rxe_resp.c
@@ -562,10 +562,13 @@ static enum resp_states check_rkey(struct rxe_qp *qp,
 static enum resp_states send_data_in(struct rxe_qp *qp, void *data_addr,
                                     int data_len)
 {
+       struct sk_buff *skb = NULL;
+       int skb_offset = 0;
        int err;
 
-       err = copy_data(qp->pd, IB_ACCESS_LOCAL_WRITE, &qp->resp.wqe->dma,
-                       data_addr, data_len, RXE_COPY_TO_MR);
+       err = rxe_copy_dma_data(skb, qp->pd, IB_ACCESS_LOCAL_WRITE,
+                               &qp->resp.wqe->dma, data_addr,
+                               skb_offset, data_len, RXE_COPY_TO_MR);
        if (unlikely(err))
                return (err == -ENOSPC) ? RESPST_ERR_LENGTH
                                        : RESPST_ERR_MALFORMED_WQE;
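The rewritten rxe_copy_dma_data() above walks the wqe's scatter/gather list,
advancing to the next sge whenever the current one is exhausted and skipping
zero-length entries. A simplified userspace model of that walk (no MR lookup
or reference counting; types and names are illustrative):

#include <stdio.h>

struct sge { unsigned long addr; unsigned int length; };

static int walk_sges(const struct sge *sge, int num_sge,
                     unsigned int offset, unsigned int length)
{
        int cur = 0;

        while (length) {
                if (offset >= sge[cur].length) {        /* entry exhausted */
                        cur++;
                        offset = 0;
                        if (cur >= num_sge)
                                return -1;              /* ran past the list */
                        if (!sge[cur].length)
                                continue;               /* skip empty entries */
                }

                unsigned int bytes = sge[cur].length - offset;
                if (bytes > length)
                        bytes = length;

                /* the kernel code copies or builds a frag here */
                printf("copy %u bytes at 0x%lx\n", bytes,
                       sge[cur].addr + offset);
                offset += bytes;
                length -= bytes;
        }
        return 0;
}

int main(void)
{
        struct sge list[] = { { 0x1000, 512 }, { 0x9000, 1024 } };

        return walk_sges(list, 2, 0, 1200);     /* 512 + 688 bytes */
}

Because dma->cur_sge and dma->sge_offset persist in the wqe between calls,
each per-packet call resumes exactly where the previous packet stopped.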
From patchwork Thu Jul 27 20:01:23 2023
X-Patchwork-Submitter: Bob Pearson
X-Patchwork-Id: 13330660
X-Patchwork-Delegate: jgg@ziepe.ca
From: Bob Pearson
To: jgg@nvidia.com, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org, jhack@hpe.com
Cc: Bob Pearson
Subject: [PATCH for-next v3 04/10] RDMA/rxe: Extend rxe_init_packet() to support frags
Date: Thu, 27 Jul 2023 15:01:23 -0500
Message-Id: <20230727200128.65947-5-rpearsonhpe@gmail.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20230727200128.65947-1-rpearsonhpe@gmail.com>
References: <20230727200128.65947-1-rpearsonhpe@gmail.com>
X-Mailing-List: linux-rdma@vger.kernel.org

Add a subroutine rxe_can_use_sg() to determine if a packet is a
candidate for a fragmented skb. Add a global variable rxe_use_sg to
control whether to support nonlinear skbs. Modify rxe_init_packet()
to test if the packet should use a fragmented skb. Fixup calls to
rxe_init_packet() to use the new API but disable creating nonlinear
skbs for now.

Signed-off-by: Bob Pearson
---
 drivers/infiniband/sw/rxe/rxe.c      |  5 +++
 drivers/infiniband/sw/rxe/rxe.h      |  3 ++
 drivers/infiniband/sw/rxe/rxe_loc.h  |  4 +-
 drivers/infiniband/sw/rxe/rxe_net.c  | 58 ++++++++++++++++++++++++++--
 drivers/infiniband/sw/rxe/rxe_req.c  |  4 +-
 drivers/infiniband/sw/rxe/rxe_resp.c |  4 +-
 6 files changed, 66 insertions(+), 12 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe.c b/drivers/infiniband/sw/rxe/rxe.c
index 6b55c595f8f8..800e8c0d437d 100644
--- a/drivers/infiniband/sw/rxe/rxe.c
+++ b/drivers/infiniband/sw/rxe/rxe.c
@@ -13,6 +13,11 @@ MODULE_AUTHOR("Bob Pearson, Frank Zago, John Groves, Kamal Heib");
 MODULE_DESCRIPTION("Soft RDMA transport");
 MODULE_LICENSE("Dual BSD/GPL");
 
+/* if true allow using fragmented skbs */
+bool rxe_use_sg;
+module_param_named(use_sg, rxe_use_sg, bool, 0444);
+MODULE_PARM_DESC(use_sg, "Support skb frags; default false");
+
 /* free resources for a rxe device all objects created for this device must
  * have been destroyed
  */
diff --git a/drivers/infiniband/sw/rxe/rxe.h b/drivers/infiniband/sw/rxe/rxe.h
index 077e3ad8f39a..b334eda62c36 100644
--- a/drivers/infiniband/sw/rxe/rxe.h
+++ b/drivers/infiniband/sw/rxe/rxe.h
@@ -30,6 +30,9 @@
 #include "rxe_verbs.h"
 #include "rxe_loc.h"
 
+/* if true allow using fragmented skbs */
+extern bool rxe_use_sg;
+
 /*
  * Version 1 and Version 2 are identical on 64 bit machines, but on 32 bit
  * machines Version 2 has a different struct layout.
diff --git a/drivers/infiniband/sw/rxe/rxe_loc.h b/drivers/infiniband/sw/rxe/rxe_loc.h
index fad853199b4d..96b1fb79610a 100644
--- a/drivers/infiniband/sw/rxe/rxe_loc.h
+++ b/drivers/infiniband/sw/rxe/rxe_loc.h
@@ -97,8 +97,8 @@ struct rxe_mw *rxe_lookup_mw(struct rxe_qp *qp, int access, u32 rkey);
 void rxe_mw_cleanup(struct rxe_pool_elem *elem);
 
 /* rxe_net.c */
-struct sk_buff *rxe_init_packet(struct rxe_dev *rxe, struct rxe_av *av,
-                               struct rxe_pkt_info *pkt);
+struct sk_buff *rxe_init_packet(struct rxe_qp *qp, struct rxe_av *av,
+                               struct rxe_pkt_info *pkt, bool *is_frag);
 int rxe_prepare(struct rxe_av *av, struct rxe_pkt_info *pkt,
                struct sk_buff *skb);
 int rxe_xmit_packet(struct rxe_qp *qp, struct rxe_pkt_info *pkt,
diff --git a/drivers/infiniband/sw/rxe/rxe_net.c b/drivers/infiniband/sw/rxe/rxe_net.c
index 94e347a7f386..c44ef39010f1 100644
--- a/drivers/infiniband/sw/rxe/rxe_net.c
+++ b/drivers/infiniband/sw/rxe/rxe_net.c
@@ -510,12 +510,47 @@ int rxe_xmit_packet(struct rxe_qp *qp, struct rxe_pkt_info *pkt,
        return err;
 }
 
-struct sk_buff *rxe_init_packet(struct rxe_dev *rxe, struct rxe_av *av,
-                               struct rxe_pkt_info *pkt)
+/**
+ * rxe_can_use_sg() - determine if packet is a candidate for fragmenting
+ * @qp: the queue pair
+ * @pkt: packet info
+ *
+ * Limit to packets with:
+ *     rxe_use_sg set
+ *     ndev supports SG
+ *
+ * Returns: true if conditions are met else false
+ */
+static bool rxe_can_use_sg(struct rxe_qp *qp, struct rxe_pkt_info *pkt)
+{
+       struct rxe_dev *rxe = to_rdev(qp->ibqp.device);
+
+       return (rxe_use_sg && (rxe->ndev->features & NETIF_F_SG));
+}
+
+/* must be big enough to hold MAC+IPV6+UDP+ROCE headers */
+#define RXE_MIN_SKB_SIZE (256)
+
+/**
+ * rxe_init_packet - allocate and initialize a new skb
+ * @qp: the queue pair
+ * @av: remote address vector
+ * @pkt: packet info
+ * @frag: optional return value for fragmented skb
+ *        on call if frag == NULL do not use fragmented skb
+ *        on return if not NULL set *frag to 1
+ *        if packet will be fragmented else 0
+ *
+ * Returns: an skb on success else NULL
+ */
+struct sk_buff *rxe_init_packet(struct rxe_qp *qp, struct rxe_av *av,
+                               struct rxe_pkt_info *pkt, bool *frag)
 {
+       struct rxe_dev *rxe = to_rdev(qp->ibqp.device);
        unsigned int hdr_len;
        struct sk_buff *skb = NULL;
        struct net_device *ndev = rxe->ndev;
+       int skb_size;
 
        if (av->network_type == RXE_NETWORK_TYPE_IPV4)
                hdr_len = ETH_HLEN + sizeof(struct udphdr) +
@@ -524,8 +559,18 @@ struct sk_buff *rxe_init_packet(struct rxe_dev *rxe, struct rxe_av *av,
                hdr_len = ETH_HLEN + sizeof(struct udphdr) +
                        sizeof(struct ipv6hdr);
 
-       skb = alloc_skb(pkt->paylen + hdr_len + LL_RESERVED_SPACE(ndev),
-                       GFP_ATOMIC);
+       skb_size = LL_RESERVED_SPACE(ndev) + hdr_len + pkt->paylen;
+       if (frag) {
+               if (rxe_can_use_sg(qp, pkt) &&
+                   (skb_size > RXE_MIN_SKB_SIZE)) {
+                       skb_size = RXE_MIN_SKB_SIZE;
+                       *frag = true;
+               } else {
+                       *frag = false;
+               }
+       }
+
+       skb = alloc_skb(skb_size, GFP_ATOMIC);
        if (unlikely(!skb))
                goto out;
 
@@ -539,6 +584,11 @@ struct sk_buff *rxe_init_packet(struct rxe_dev *rxe, struct rxe_av *av,
        else
                skb->protocol = htons(ETH_P_IPV6);
 
+       if (frag && *frag)
+               pkt->hdr = skb_put(skb, rxe_opcode[pkt->opcode].length);
+       else
+               pkt->hdr = skb_put(skb, pkt->paylen);
+
 out:
        return skb;
 }
diff --git a/drivers/infiniband/sw/rxe/rxe_req.c b/drivers/infiniband/sw/rxe/rxe_req.c
index 525e704c12c2..491360fef346 100644
--- a/drivers/infiniband/sw/rxe/rxe_req.c
+++ b/drivers/infiniband/sw/rxe/rxe_req.c
@@ -369,14 +369,12 @@ static struct sk_buff *rxe_init_req_packet(struct rxe_qp *qp,
                       pkt->pad + RXE_ICRC_SIZE;
 
        /* init skb */
-       skb = rxe_init_packet(rxe, av, pkt);
+       skb = rxe_init_packet(qp, av, pkt, NULL);
        if (unlikely(!skb)) {
                err = -ENOMEM;
                goto err_out;
        }
 
-       pkt->hdr = skb_put(skb, pkt->paylen);
-
        /* init roce headers */
        rxe_init_roce_hdrs(qp, wqe, pkt);
 
diff --git a/drivers/infiniband/sw/rxe/rxe_resp.c b/drivers/infiniband/sw/rxe/rxe_resp.c
index a6c1d67ad943..254f2eab8d20 100644
--- a/drivers/infiniband/sw/rxe/rxe_resp.c
+++ b/drivers/infiniband/sw/rxe/rxe_resp.c
@@ -788,12 +788,10 @@ static struct sk_buff *prepare_ack_packet(struct rxe_qp *qp,
        ack->paylen = rxe_opcode[opcode].length + payload +
                      ack->pad + RXE_ICRC_SIZE;
 
-       skb = rxe_init_packet(rxe, &qp->pri_av, ack);
+       skb = rxe_init_packet(qp, &qp->pri_av, ack, NULL);
        if (!skb)
                return NULL;
 
-       ack->hdr = skb_put(skb, ack->paylen);
-
        bth_init(ack, opcode, 0, 0, ack->pad, IB_DEFAULT_PKEY_FULL,
                 qp->attr.dest_qp_num, 0, psn);
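Since the use_sg parameter is created with mode 0444 it can only be set when
the module is loaded (e.g. rdma_rxe use_sg=1). The size decision
rxe_init_packet() makes is simple: if fragmenting is allowed and the packet
would not fit in RXE_MIN_SKB_SIZE bytes, allocate only the minimal linear
buffer and mark the packet for frags. A userspace sketch of that decision
(the constant mirrors the patch; the helper itself is illustrative):

#include <stdbool.h>
#include <stdio.h>

#define RXE_MIN_SKB_SIZE 256

static int skb_alloc_size(int reserved, int hdr_len, int paylen,
                          bool can_use_sg, bool *frag)
{
        int size = reserved + hdr_len + paylen;

        if (can_use_sg && size > RXE_MIN_SKB_SIZE) {
                *frag = true;           /* headers linear, payload in frags */
                return RXE_MIN_SKB_SIZE;
        }
        *frag = false;                  /* whole packet in the linear buffer */
        return size;
}

int main(void)
{
        bool frag;
        int size = skb_alloc_size(16, 58, 4096, true, &frag);

        printf("alloc %d bytes, frag=%d\n", size, frag);
        return 0;
}

Callers that pass frag == NULL (as all callers do at this point in the
series) always get the old linear behavior, which is why the patch can land
without changing the driver's behavior yet.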
From patchwork Thu Jul 27 20:01:24 2023
X-Patchwork-Submitter: Bob Pearson
X-Patchwork-Id: 13330663
X-Patchwork-Delegate: jgg@ziepe.ca
From: Bob Pearson
To: jgg@nvidia.com, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org, jhack@hpe.com
Cc: Bob Pearson
Subject: [PATCH for-next v3 05/10] RDMA/rxe: Extend rxe_icrc.c to support frags
Date: Thu, 27 Jul 2023 15:01:24 -0500
Message-Id: <20230727200128.65947-6-rpearsonhpe@gmail.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20230727200128.65947-1-rpearsonhpe@gmail.com>
References: <20230727200128.65947-1-rpearsonhpe@gmail.com>
X-Mailing-List: linux-rdma@vger.kernel.org

Extend the subroutines rxe_icrc_generate() and rxe_icrc_check() to
support skb frags.

Signed-off-by: Bob Pearson
---
 drivers/infiniband/sw/rxe/rxe_icrc.c | 65 ++++++++++++++++++++++++----
 drivers/infiniband/sw/rxe/rxe_net.c  | 51 +++++++++++++++++-----
 drivers/infiniband/sw/rxe/rxe_recv.c |  1 +
 3 files changed, 98 insertions(+), 19 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe_icrc.c b/drivers/infiniband/sw/rxe/rxe_icrc.c
index c9aa0995e900..393391863350 100644
--- a/drivers/infiniband/sw/rxe/rxe_icrc.c
+++ b/drivers/infiniband/sw/rxe/rxe_icrc.c
@@ -63,7 +63,7 @@ static __be32 rxe_crc32(struct rxe_dev *rxe, __be32 crc, void *next, size_t len)
 
 /**
  * rxe_icrc_hdr() - Compute the partial ICRC for the network and transport
- *               headers of a packet.
+ *                 headers of a packet.
  * @skb: packet buffer
  * @pkt: packet information
  *
@@ -129,6 +129,56 @@ static __be32 rxe_icrc_hdr(struct sk_buff *skb, struct rxe_pkt_info *pkt)
        return crc;
 }
 
+/**
+ * rxe_icrc_payload() - Compute the ICRC for a packet payload and also
+ *                      compute the address of the icrc in the packet.
+ * @skb: packet buffer
+ * @pkt: packet information
+ * @icrc: current icrc, i.e. including headers
+ * @icrcp: returned pointer to icrc in skb
+ *
+ * Returns: the icrc including the payload
+ */
+static __be32 rxe_icrc_payload(struct sk_buff *skb, struct rxe_pkt_info *pkt,
+                              __be32 icrc, __be32 **icrcp)
+{
+       struct skb_shared_info *shinfo = skb_shinfo(skb);
+       skb_frag_t *frag;
+       u8 *addr;
+       int hdr_len;
+       int len;
+       int i;
+
+       /* handle any payload left in the linear buffer */
+       hdr_len = rxe_opcode[pkt->opcode].length;
+       addr = pkt->hdr + hdr_len;
+       len = skb_tail_pointer(skb) - skb_transport_header(skb)
+               - sizeof(struct udphdr) - hdr_len;
+       if (!shinfo->nr_frags) {
+               len -= RXE_ICRC_SIZE;
+               *icrcp = (__be32 *)(addr + len);
+       }
+       if (len > 0)
+               icrc = rxe_crc32(pkt->rxe, icrc, payload_addr(pkt), len);
+       WARN_ON(len < 0);
+
+       /* handle any payload in frags */
+       for (i = 0; i < shinfo->nr_frags; i++) {
+               frag = &shinfo->frags[i];
+               addr = page_to_virt(frag->bv_page) + frag->bv_offset;
+               len = frag->bv_len;
+               if (i == shinfo->nr_frags - 1) {
+                       len -= RXE_ICRC_SIZE;
+                       *icrcp = (__be32 *)(addr + len);
+               }
+               if (len > 0)
+                       icrc = rxe_crc32(pkt->rxe, icrc, addr, len);
+               WARN_ON(len < 0);
+       }
+
+       return icrc;
+}
+
 /**
  * rxe_icrc_check() - Compute ICRC for a packet and compare to the ICRC
  *                    delivered in the packet.
@@ -143,13 +193,11 @@ int rxe_icrc_check(struct sk_buff *skb, struct rxe_pkt_info *pkt)
        __be32 pkt_icrc;
        __be32 icrc;
 
-       icrcp = (__be32 *)(pkt->hdr + pkt->paylen - RXE_ICRC_SIZE);
-       pkt_icrc = *icrcp;
-
        icrc = rxe_icrc_hdr(skb, pkt);
-       icrc = rxe_crc32(pkt->rxe, icrc, (u8 *)payload_addr(pkt),
-                        payload_size(pkt) + pkt->pad);
+       icrc = rxe_icrc_payload(skb, pkt, icrc, &icrcp);
        icrc = ~icrc;
 
+       pkt_icrc = *icrcp;
        if (unlikely(icrc != pkt_icrc))
                return -EINVAL;
 
@@ -167,9 +215,8 @@ void rxe_icrc_generate(struct sk_buff *skb, struct rxe_pkt_info *pkt)
        __be32 *icrcp;
        __be32 icrc;
 
-       icrcp = (__be32 *)(pkt->hdr + pkt->paylen - RXE_ICRC_SIZE);
        icrc = rxe_icrc_hdr(skb, pkt);
-       icrc = rxe_crc32(pkt->rxe, icrc, (u8 *)payload_addr(pkt),
-                        payload_size(pkt) + pkt->pad);
+       icrc = rxe_icrc_payload(skb, pkt, icrc, &icrcp);
+
        *icrcp = ~icrc;
 }
diff --git a/drivers/infiniband/sw/rxe/rxe_net.c b/drivers/infiniband/sw/rxe/rxe_net.c
index c44ef39010f1..c43f9dd3ae6e 100644
--- a/drivers/infiniband/sw/rxe/rxe_net.c
+++ b/drivers/infiniband/sw/rxe/rxe_net.c
@@ -148,33 +148,53 @@ static int rxe_udp_encap_recv(struct sock *sk, struct sk_buff *skb)
        struct udphdr *udph;
        struct rxe_dev *rxe;
        struct rxe_pkt_info *pkt = SKB_TO_PKT(skb);
+       u8 opcode;
+       u8 buf[1];
+       u8 *p;
 
        /* takes a reference on rxe->ib_dev
        * drop when skb is freed
        */
        rxe = get_rxe_from_skb(skb);
        if (!rxe)
-               goto drop;
+               goto err_drop;
 
-       if (skb_linearize(skb)) {
-               ib_device_put(&rxe->ib_dev);
-               goto drop;
+       /* Get bth opcode out of skb, it may be in a fragment */
+       p = skb_header_pointer(skb, sizeof(struct udphdr), 1, buf);
+       if (!p)
+               goto err_device_put;
+       opcode = *p;
+
+       /* If using fragmented skbs make sure roce headers
+        * are in linear buffer else make skb linear
+        */
+       if (rxe_use_sg && skb_is_nonlinear(skb)) {
+               int delta = rxe_opcode[opcode].length -
+                       (skb_headlen(skb) - sizeof(struct udphdr));
+
+               if (delta > 0 && !__pskb_pull_tail(skb, delta))
+                       goto err_device_put;
+       } else {
+               if (skb_linearize(skb))
+                       goto err_device_put;
        }
 
        udph = udp_hdr(skb);
        pkt->rxe = rxe;
        pkt->port_num = 1;
        pkt->hdr = (u8 *)(udph + 1);
-       pkt->mask = RXE_GRH_MASK;
+       pkt->mask = rxe_opcode[opcode].mask | RXE_GRH_MASK;
        pkt->paylen = be16_to_cpu(udph->len) - sizeof(*udph);
 
-       /* remove udp header */
        skb_pull(skb, sizeof(struct udphdr));
 
 	rxe_rcv(skb);
 
 	return 0;
-drop:
+
+err_device_put:
+	ib_device_put(&rxe->ib_dev);
+err_drop:
 	kfree_skb(skb);
 
 	return 0;
@@ -446,24 +466,35 @@ static int rxe_send(struct sk_buff *skb, struct rxe_pkt_info *pkt)
  */
 static int rxe_loopback(struct sk_buff *skb, struct rxe_pkt_info *pkt)
 {
-	memcpy(SKB_TO_PKT(skb), pkt, sizeof(*pkt));
+	struct rxe_pkt_info *newpkt;
+	int err;
 
+	/* make loopback line up with rxe_udp_encap_recv */
 	if (skb->protocol == htons(ETH_P_IP))
 		skb_pull(skb, sizeof(struct iphdr));
 	else
 		skb_pull(skb, sizeof(struct ipv6hdr));
 
+	skb_reset_transport_header(skb);
+
+	newpkt = SKB_TO_PKT(skb);
+	memcpy(newpkt, pkt, sizeof(*newpkt));
+	newpkt->hdr = skb_transport_header(skb) + sizeof(struct udphdr);
+
 	if (WARN_ON(!ib_device_try_get(&pkt->rxe->ib_dev))) {
-		kfree_skb(skb);
-		return -EIO;
+		err = -EINVAL;
+		goto drop;
 	}
 
 	/* remove udp header */
 	skb_pull(skb, sizeof(struct udphdr));
 
 	rxe_rcv(skb);
 	return 0;
+
+drop:
+	kfree_skb(skb);
+	return err;
 }
 
 int rxe_xmit_packet(struct rxe_qp *qp, struct rxe_pkt_info *pkt,
diff --git a/drivers/infiniband/sw/rxe/rxe_recv.c b/drivers/infiniband/sw/rxe/rxe_recv.c
index f912a913f89a..940197199252 100644
--- a/drivers/infiniband/sw/rxe/rxe_recv.c
+++ b/drivers/infiniband/sw/rxe/rxe_recv.c
@@ -338,6 +338,7 @@ void rxe_rcv(struct sk_buff *skb)
 	if (unlikely(err))
 		goto drop;
 
+	/* skb->data points at UDP header */
 	err = rxe_icrc_check(skb, pkt);
 	if (unlikely(err))
 		goto drop;
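[The payload walk in rxe_icrc_payload() above generalizes the old linear-only CRC to an skb with page fragments. Below is a minimal userspace model of the same loop; the types and the crc32_step() helper are stand-ins for the kernel's skb machinery and rxe_crc32(), not real APIs.]

#include <stddef.h>
#include <stdint.h>

#define ICRC_SIZE 4

struct frag { const uint8_t *addr; size_t len; };

static uint32_t crc32_step(uint32_t crc, const uint8_t *p, size_t len)
{
	while (len--)
		crc = (crc << 1) ^ *p++;	/* placeholder, not a real CRC */
	return crc;
}

static uint32_t crc_payload(uint32_t crc, const uint8_t *lin, size_t lin_len,
			    const struct frag *frags, size_t nr_frags)
{
	size_t len = lin_len;
	size_t i;

	/* fold in any payload left in the linear buffer; if there are
	 * no frags the 4-byte ICRC also sits here and must be skipped
	 */
	if (!nr_frags)
		len -= ICRC_SIZE;
	crc = crc32_step(crc, lin, len);

	/* then each fragment, trimming the ICRC from the last one */
	for (i = 0; i < nr_frags; i++) {
		len = frags[i].len;
		if (i == nr_frags - 1)
			len -= ICRC_SIZE;
		crc = crc32_step(crc, frags[i].addr, len);
	}
	return crc;
}

[Either way the ICRC ends the very last segment, which is why the loop can also return its address to the caller.]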
From patchwork Thu Jul 27 20:01:25 2023
From: Bob Pearson
To: jgg@nvidia.com, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org, jhack@hpe.com
Cc: Bob Pearson
Subject: [PATCH for-next v3 06/10] RDMA/rxe: Extend rxe_init_req_packet() for frags
Date: Thu, 27 Jul 2023 15:01:25 -0500
Message-Id: <20230727200128.65947-7-rpearsonhpe@gmail.com>
In-Reply-To: <20230727200128.65947-1-rpearsonhpe@gmail.com>
References: <20230727200128.65947-1-rpearsonhpe@gmail.com>

Add code to rxe_init_req_packet() to allocate space for the pad and
icrc if the skb is fragmented.
Signed-off-by: Bob Pearson
---
 drivers/infiniband/sw/rxe/rxe_loc.h    |  5 ++
 drivers/infiniband/sw/rxe/rxe_mr.c     |  5 +-
 drivers/infiniband/sw/rxe/rxe_opcode.c |  2 +
 drivers/infiniband/sw/rxe/rxe_req.c    | 83 ++++++++++++++++++++++----
 4 files changed, 84 insertions(+), 11 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe_loc.h b/drivers/infiniband/sw/rxe/rxe_loc.h
index 96b1fb79610a..40624de62288 100644
--- a/drivers/infiniband/sw/rxe/rxe_loc.h
+++ b/drivers/infiniband/sw/rxe/rxe_loc.h
@@ -177,7 +177,12 @@ void rxe_srq_cleanup(struct rxe_pool_elem *elem);
 void rxe_dealloc(struct ib_device *ib_dev);
 
 int rxe_completer(struct rxe_qp *qp);
+
+/* rxe_req.c */
+int rxe_prepare_pad_icrc(struct rxe_pkt_info *pkt, struct sk_buff *skb,
+			 int payload, bool frag);
 int rxe_requester(struct rxe_qp *qp);
+
 int rxe_responder(struct rxe_qp *qp);
 
 /* rxe_icrc.c */
diff --git a/drivers/infiniband/sw/rxe/rxe_mr.c b/drivers/infiniband/sw/rxe/rxe_mr.c
index 0ac71238599a..5178775f2d4e 100644
--- a/drivers/infiniband/sw/rxe/rxe_mr.c
+++ b/drivers/infiniband/sw/rxe/rxe_mr.c
@@ -263,7 +263,10 @@ int rxe_add_frag(struct sk_buff *skb, struct rxe_mr *mr, struct page *page,
 	skb_frag_t *frag = &skb_shinfo(skb)->frags[nr_frags];
 
 	if (nr_frags >= MAX_SKB_FRAGS) {
-		rxe_dbg_mr(mr, "ran out of frags");
+		if (mr)
+			rxe_dbg_mr(mr, "ran out of frags");
+		else
+			rxe_dbg("ran out of frags");
 		return -EINVAL;
 	}
diff --git a/drivers/infiniband/sw/rxe/rxe_opcode.c b/drivers/infiniband/sw/rxe/rxe_opcode.c
index f358b732a751..a72e5fd4f571 100644
--- a/drivers/infiniband/sw/rxe/rxe_opcode.c
+++ b/drivers/infiniband/sw/rxe/rxe_opcode.c
@@ -399,6 +399,8 @@ struct rxe_opcode_info rxe_opcode[RXE_NUM_OPCODE] = {
 			[RXE_BTH]	= 0,
 			[RXE_FETH]	= RXE_BTH_BYTES,
 			[RXE_RETH]	= RXE_BTH_BYTES + RXE_FETH_BYTES,
+			[RXE_PAYLOAD]	= RXE_BTH_BYTES + RXE_FETH_BYTES +
+					  RXE_RETH_BYTES,
 		}
 	},
 	[IB_OPCODE_RC_ATOMIC_WRITE]			= {
diff --git a/drivers/infiniband/sw/rxe/rxe_req.c b/drivers/infiniband/sw/rxe/rxe_req.c
index 491360fef346..cf34d1a58f85 100644
--- a/drivers/infiniband/sw/rxe/rxe_req.c
+++ b/drivers/infiniband/sw/rxe/rxe_req.c
@@ -316,26 +316,83 @@ static void rxe_init_roce_hdrs(struct rxe_qp *qp, struct rxe_send_wqe *wqe,
 
 static int rxe_init_payload(struct rxe_qp *qp, struct rxe_send_wqe *wqe,
 			    struct rxe_pkt_info *pkt, u32 payload,
-			    struct sk_buff *skb)
+			    struct sk_buff *skb, bool frag)
 {
+	int len = skb_tailroom(skb);
+	int tot_len = payload + pkt->pad + RXE_ICRC_SIZE;
+	int access = 0;
 	int skb_offset = 0;
+	int op;
+	void *addr;
 	void *data;
 	int err = 0;
 
 	if (wqe->wr.send_flags & IB_SEND_INLINE) {
+		if (WARN_ON(frag)) {
+			rxe_err_qp(qp, "inline data for fragmented skb not supported");
+			return -EINVAL;
+		}
+		if (len < tot_len) {
+			rxe_err_qp(qp, "skb too small");
+			return -EINVAL;
+		}
 		data = &wqe->dma.inline_data[wqe->dma.sge_offset];
 		memcpy(payload_addr(pkt), data, payload);
 		wqe->dma.resid -= payload;
 		wqe->dma.sge_offset += payload;
 	} else {
-		err = rxe_copy_dma_data(skb, qp->pd, 0, &wqe->dma,
-					payload_addr(pkt), skb_offset,
-					payload, RXE_COPY_FROM_MR);
+		op = frag ? RXE_FRAG_FROM_MR : RXE_COPY_FROM_MR;
+		addr = frag ? NULL : payload_addr(pkt);
+		err = rxe_copy_dma_data(skb, qp->pd, access, &wqe->dma,
+					addr, skb_offset, payload, op);
 	}
 
 	return err;
 }
 
+/**
+ * rxe_prepare_pad_icrc() - Alloc space if fragmented and init pad and icrc
+ * @pkt: packet info
+ * @skb: packet buffer
+ * @payload: roce payload
+ * @frag: true if skb is fragmented
+ *
+ * Returns: 0 on success else an error
+ */
+int rxe_prepare_pad_icrc(struct rxe_pkt_info *pkt, struct sk_buff *skb,
+			 int payload, bool frag)
+{
+	unsigned int length = RXE_ICRC_SIZE + pkt->pad;
+	unsigned int offset;
+	struct page *page;
+	u64 iova;
+	u8 *addr;
+
+	if (frag) {
+		addr = skb_end_pointer(skb) - length;
+		iova = (uintptr_t)addr;
+		page = virt_to_page(iova);
+		offset = iova & (PAGE_SIZE - 1);
+
+		/* make sure there is enough room and that the frag
+		 * does not cross a page boundary; this should never
+		 * fail
+		 */
+		if (WARN_ON(((skb->end - skb->tail) <= length) ||
+			    ((offset + length) > PAGE_SIZE)))
+			return -ENOMEM;
+
+		memset(addr, 0, length);
+
+		return rxe_add_frag(skb, NULL, page, length, offset);
+	}
+
+	addr = payload_addr(pkt) + payload;
+	memset(addr, 0, length);
+
+	return 0;
+}
+
 static struct sk_buff *rxe_init_req_packet(struct rxe_qp *qp,
 					   struct rxe_send_wqe *wqe,
 					   int opcode, u32 payload,
@@ -345,7 +402,7 @@ static struct sk_buff *rxe_init_req_packet(struct rxe_qp *qp,
 	struct sk_buff *skb = NULL;
 	struct rxe_av *av;
 	struct rxe_ah *ah = NULL;
-	u8 *pad_addr;
+	bool frag = false;
 	int err;
 
 	pkt->rxe = rxe;
@@ -380,9 +437,13 @@ static struct sk_buff *rxe_init_req_packet(struct rxe_qp *qp,
 
 	/* init payload if any */
 	if (pkt->mask & RXE_WRITE_OR_SEND_MASK) {
-		err = rxe_init_payload(qp, wqe, pkt, payload, skb);
-		if (unlikely(err))
+		err = rxe_init_payload(qp, wqe, pkt, payload,
+				       skb, frag);
+		if (unlikely(err)) {
+			rxe_dbg_qp(qp, "rxe_init_payload failed, err = %d",
+				   err);
 			goto err_out;
+		}
 	} else if (pkt->mask & RXE_FLUSH_MASK) {
 		/* oA19-2: shall have no payload. */
 		wqe->dma.resid = 0;
@@ -394,9 +455,11 @@ static struct sk_buff *rxe_init_req_packet(struct rxe_qp *qp,
 	}
 
 	/* init pad and icrc */
-	if (pkt->pad) {
-		pad_addr = payload_addr(pkt) + payload;
-		memset(pad_addr, 0, pkt->pad);
+	err = rxe_prepare_pad_icrc(pkt, skb, payload, frag);
+	if (unlikely(err)) {
+		rxe_dbg_qp(qp, "rxe_prepare_pad_icrc failed, err = %d",
+			   err);
+		goto err_out;
 	}
 
 	/* init IP and UDP network headers */
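[The trailer placement in rxe_prepare_pad_icrc() above depends on the pad + ICRC fitting in the skb's unused tail without crossing a page boundary, so the region can be described by a single (page, offset, length) fragment. A self-contained userspace sketch of that check follows; PAGE_SIZE and the function name are illustrative, not the kernel's.]

#include <stdint.h>
#include <string.h>

#define PAGE_SIZE 4096u

static int place_trailer(uint8_t *end, size_t room, size_t length)
{
	uint8_t *addr = end - length;		/* trailer ends the buffer */
	uintptr_t iova = (uintptr_t)addr;
	size_t offset = iova & (PAGE_SIZE - 1);	/* offset within its page */

	/* reject if the tail has no room or the span crosses a page */
	if (room <= length || offset + length > PAGE_SIZE)
		return -1;

	memset(addr, 0, length);		/* zero the pad and ICRC bytes */
	return 0;
}

[The page-boundary condition matters because a single skb frag can only reference one page.]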
From patchwork Thu Jul 27 20:01:26 2023
From: Bob Pearson
To: jgg@nvidia.com, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org, jhack@hpe.com
Cc: Bob Pearson
Subject: [PATCH for-next v3 07/10] RDMA/rxe: Extend response packets for frags
Date: Thu, 27 Jul 2023 15:01:26 -0500
Message-Id: <20230727200128.65947-8-rpearsonhpe@gmail.com>
In-Reply-To: <20230727200128.65947-1-rpearsonhpe@gmail.com>
References: <20230727200128.65947-1-rpearsonhpe@gmail.com>

Extend prepare_ack_packet(), read_reply() and send_common_ack() in
rxe_resp.c to support fragmented skbs. Adjust calls to these routines
for the changed API.

Signed-off-by: Bob Pearson
---
 drivers/infiniband/sw/rxe/rxe_resp.c | 59 ++++++++++++++++++----------
 1 file changed, 38 insertions(+), 21 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe_resp.c b/drivers/infiniband/sw/rxe/rxe_resp.c
index 254f2eab8d20..dc62e11dc448 100644
--- a/drivers/infiniband/sw/rxe/rxe_resp.c
+++ b/drivers/infiniband/sw/rxe/rxe_resp.c
@@ -765,14 +765,11 @@ static enum resp_states atomic_write_reply(struct rxe_qp *qp,
 
 static struct sk_buff *prepare_ack_packet(struct rxe_qp *qp,
 					  struct rxe_pkt_info *ack,
-					  int opcode,
-					  int payload,
-					  u32 psn,
-					  u8 syndrome)
+					  int opcode, int payload, u32 psn,
+					  u8 syndrome, bool *fragp)
 {
 	struct rxe_dev *rxe = to_rdev(qp->ibqp.device);
 	struct sk_buff *skb;
-	int err;
 
 	ack->rxe = rxe;
 	ack->qp = qp;
@@ -788,7 +785,7 @@ static struct sk_buff *prepare_ack_packet(struct rxe_qp *qp,
 	ack->paylen = rxe_opcode[opcode].length + payload +
 		ack->pad + RXE_ICRC_SIZE;
 
-	skb = rxe_init_packet(qp, &qp->pri_av, ack, NULL);
+	skb = rxe_init_packet(qp, &qp->pri_av, ack, fragp);
 	if (!skb)
 		return NULL;
@@ -803,12 +800,6 @@ static struct sk_buff *prepare_ack_packet(struct rxe_qp *qp,
 	if (ack->mask & RXE_ATMACK_MASK)
 		atmack_set_orig(ack, qp->resp.res->atomic.orig_val);
 
-	err = rxe_prepare(&qp->pri_av, ack, skb);
-	if (err) {
-		kfree_skb(skb);
-		return NULL;
-	}
-
 	return skb;
 }
 
@@ -881,7 +872,8 @@ static enum resp_states read_reply(struct rxe_qp *qp,
 	struct resp_res *res = qp->resp.res;
 	struct rxe_mr *mr;
 	unsigned int skb_offset = 0;
-	u8 *pad_addr;
+	enum rxe_mr_copy_op op;
+	bool frag;
 
 	if (!res) {
 		res = rxe_prepare_res(qp, req_pkt, RXE_READ_MASK);
@@ -898,8 +890,10 @@ static enum resp_states read_reply(struct rxe_qp *qp,
 		qp->resp.mr = NULL;
 	} else {
 		mr = rxe_recheck_mr(qp, res->read.rkey);
-		if (!mr)
-			return RESPST_ERR_RKEY_VIOLATION;
+		if (!mr) {
+			state = RESPST_ERR_RKEY_VIOLATION;
+			goto err_out;
+		}
 	}
 
 	if (res->read.resid <= mtu)
@@ -926,23 +920,33 @@ static enum resp_states read_reply(struct rxe_qp *qp,
 	payload = min_t(int, res->read.resid, mtu);
 
 	skb = prepare_ack_packet(qp, &ack_pkt, opcode, payload,
-				 res->cur_psn, AETH_ACK_UNLIMITED);
+				 res->cur_psn, AETH_ACK_UNLIMITED, &frag);
 	if (!skb) {
 		state = RESPST_ERR_RNR;
 		goto err_out;
 	}
 
+	op = frag ? RXE_FRAG_FROM_MR : RXE_COPY_FROM_MR;
 	err = rxe_copy_mr_data(skb, mr, res->read.va,
 			       payload_addr(&ack_pkt),
-			       skb_offset, payload, RXE_COPY_FROM_MR);
+			       skb_offset, payload, op);
 	if (err) {
 		kfree_skb(skb);
 		state = RESPST_ERR_RKEY_VIOLATION;
 		goto err_out;
 	}
 
-	if (ack_pkt.pad) {
-		pad_addr = payload_addr(&ack_pkt) + payload;
-		memset(pad_addr, 0, ack_pkt.pad);
+	err = rxe_prepare_pad_icrc(&ack_pkt, skb, payload, frag);
+	if (err) {
+		kfree_skb(skb);
+		state = RESPST_ERR_RNR;
+		goto err_out;
+	}
+
+	err = rxe_prepare(&qp->pri_av, &ack_pkt, skb);
+	if (err) {
+		kfree_skb(skb);
+		state = RESPST_ERR_RNR;
+		goto err_out;
 	}
 
 	/* rxe_xmit_packet always consumes the skb */
@@ -1177,10 +1181,23 @@ static int send_common_ack(struct rxe_qp *qp, u8 syndrome, u32 psn,
 	struct rxe_pkt_info ack_pkt;
 	struct sk_buff *skb;
 
-	skb = prepare_ack_packet(qp, &ack_pkt, opcode, 0, psn, syndrome);
+	skb = prepare_ack_packet(qp, &ack_pkt, opcode, 0, psn,
+				 syndrome, NULL);
 	if (!skb)
 		return -ENOMEM;
 
+	err = rxe_prepare_pad_icrc(&ack_pkt, skb, 0, false);
+	if (err) {
+		kfree_skb(skb);
+		return err;
+	}
+
+	err = rxe_prepare(&qp->pri_av, &ack_pkt, skb);
+	if (err) {
+		kfree_skb(skb);
+		return err;
+	}
+
 	err = rxe_xmit_packet(qp, &ack_pkt, skb);
 	if (err)
 		rxe_dbg_qp(qp, "Failed sending %s\n", msg);
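[The API change above splits ack construction into two phases: prepare_ack_packet() now only builds headers, and each caller copies the payload before finishing the packet, because the ICRC can only be computed once the payload and any frags are in place. A toy model of the reordered flow, with made-up names and types, just to show the ordering constraint and error unwinding:]

#include <errno.h>
#include <stdlib.h>

struct pkt { int have_payload; int finalized; };

static struct pkt *build_hdrs(void)
{
	return calloc(1, sizeof(struct pkt));	/* phase 1: headers only */
}

static int copy_payload(struct pkt *p)
{
	p->have_payload = 1;			/* may also add frags */
	return 0;
}

static int finalize(struct pkt *p)		/* pad + ICRC + net headers */
{
	if (!p->have_payload)
		return -EINVAL;			/* ICRC would be wrong */
	p->finalized = 1;
	return 0;
}

static int xmit(struct pkt *p)
{
	free(p);				/* xmit always consumes the packet */
	return 0;
}

static int send_reply(void)
{
	struct pkt *p = build_hdrs();
	int err;

	if (!p)
		return -ENOMEM;
	err = copy_payload(p);
	if (!err)
		err = finalize(p);
	if (err) {
		free(p);
		return err;
	}
	return xmit(p);
}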
From patchwork Thu Jul 27 20:01:27 2023
From: Bob Pearson
To: jgg@nvidia.com, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org, jhack@hpe.com
Cc: Bob Pearson
Subject: [PATCH for-next v3 08/10] RDMA/rxe: Extend send/write_data_in() for frags
Date: Thu, 27 Jul 2023 15:01:27 -0500
Message-Id: <20230727200128.65947-9-rpearsonhpe@gmail.com>
In-Reply-To: <20230727200128.65947-1-rpearsonhpe@gmail.com>
References: <20230727200128.65947-1-rpearsonhpe@gmail.com>

Extend send_data_in() and write_data_in() in rxe_resp.c to support
fragmented received skbs. This is in preparation for using fragmented
skbs.

Signed-off-by: Bob Pearson
---
 drivers/infiniband/sw/rxe/rxe_resp.c | 102 +++++++++++++++++----------
 1 file changed, 64 insertions(+), 38 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe_resp.c b/drivers/infiniband/sw/rxe/rxe_resp.c
index dc62e11dc448..c7153e376987 100644
--- a/drivers/infiniband/sw/rxe/rxe_resp.c
+++ b/drivers/infiniband/sw/rxe/rxe_resp.c
@@ -559,45 +559,88 @@ static enum resp_states check_rkey(struct rxe_qp *qp,
 	return state;
 }
 
-static enum resp_states send_data_in(struct rxe_qp *qp, void *data_addr,
-				     int data_len)
+/**
+ * rxe_send_data_in() - Copy payload data into receive buffer
+ * @qp: The queue pair
+ * @pkt: Request packet info
+ *
+ * Copy the packet payload into the receive buffer at the current offset.
+ * For a UD message, also copy the IP header into the receive buffer.
+ *
+ * Returns: RESPST_NONE if successful else an error resp_states value.
+ */
+static enum resp_states rxe_send_data_in(struct rxe_qp *qp,
+					 struct rxe_pkt_info *pkt)
 {
-	struct sk_buff *skb = NULL;
+	struct sk_buff *skb = PKT_TO_SKB(pkt);
+	u8 *data_addr = payload_addr(pkt);
+	int data_len = payload_size(pkt);
+	union rdma_network_hdr hdr;
+	enum rxe_mr_copy_op op;
 	int skb_offset = 0;
 	int err;
 
+	/* Per IBA for UD packets copy the IP header into the receive buffer */
+	if (qp_type(qp) == IB_QPT_UD || qp_type(qp) == IB_QPT_GSI) {
+		if (skb->protocol == htons(ETH_P_IP)) {
+			memset(&hdr.reserved, 0, sizeof(hdr.reserved));
+			memcpy(&hdr.roce4grh, ip_hdr(skb), sizeof(hdr.roce4grh));
+		} else {
+			memcpy(&hdr.ibgrh, ipv6_hdr(skb), sizeof(hdr));
+		}
+		err = rxe_copy_dma_data(skb, qp->pd, IB_ACCESS_LOCAL_WRITE,
+					&qp->resp.wqe->dma, &hdr, skb_offset,
+					sizeof(hdr), RXE_COPY_TO_MR);
+		if (err)
+			goto err_out;
+	}
+
+	op = skb_is_nonlinear(skb) ? RXE_FRAG_TO_MR : RXE_COPY_TO_MR;
+	/* offset to payload from skb->data (= &bth header) */
+	skb_offset = rxe_opcode[pkt->opcode].length;
 	err = rxe_copy_dma_data(skb, qp->pd, IB_ACCESS_LOCAL_WRITE,
 				&qp->resp.wqe->dma, data_addr,
-				skb_offset, data_len, RXE_COPY_TO_MR);
-	if (unlikely(err))
-		return (err == -ENOSPC) ? RESPST_ERR_LENGTH
-					: RESPST_ERR_MALFORMED_WQE;
+				skb_offset, data_len, op);
+	if (err)
+		goto err_out;
 
 	return RESPST_NONE;
+
+err_out:
+	return (err == -ENOSPC) ? RESPST_ERR_LENGTH
+				: RESPST_ERR_MALFORMED_WQE;
 }
 
-static enum resp_states write_data_in(struct rxe_qp *qp,
-				      struct rxe_pkt_info *pkt)
+/**
+ * rxe_write_data_in() - Copy payload data to iova
+ * @qp: The queue pair
+ * @pkt: Request packet info
+ *
+ * Copy the packet payload to the current iova and update iova.
+ *
+ * Returns: RESPST_NONE if successful else an error resp_states value.
+ */
+static enum resp_states rxe_write_data_in(struct rxe_qp *qp,
+					  struct rxe_pkt_info *pkt)
 {
 	struct sk_buff *skb = PKT_TO_SKB(pkt);
-	enum resp_states rc = RESPST_NONE;
+	u8 *data_addr = payload_addr(pkt);
 	int data_len = payload_size(pkt);
+	enum rxe_mr_copy_op op;
+	int skb_offset;
 	int err;
-	int skb_offset = 0;
 
+	op = skb_is_nonlinear(skb) ? RXE_FRAG_TO_MR : RXE_COPY_TO_MR;
+	skb_offset = rxe_opcode[pkt->opcode].length;
 	err = rxe_copy_mr_data(skb, qp->resp.mr, qp->resp.va + qp->resp.offset,
-			       payload_addr(pkt), skb_offset, data_len,
-			       RXE_COPY_TO_MR);
-	if (err) {
-		rc = RESPST_ERR_RKEY_VIOLATION;
-		goto out;
-	}
+			       data_addr, skb_offset, data_len, op);
+	if (err)
+		return RESPST_ERR_RKEY_VIOLATION;
 
 	qp->resp.va += data_len;
 	qp->resp.resid -= data_len;
 
-out:
-	return rc;
+	return RESPST_NONE;
 }
 
 static struct resp_res *rxe_prepare_res(struct rxe_qp *qp,
@@ -991,30 +1034,13 @@ static int invalidate_rkey(struct rxe_qp *qp, u32 rkey)
 static enum resp_states execute(struct rxe_qp *qp, struct rxe_pkt_info *pkt)
 {
 	enum resp_states err;
-	struct sk_buff *skb = PKT_TO_SKB(pkt);
-	union rdma_network_hdr hdr;
 
 	if (pkt->mask & RXE_SEND_MASK) {
-		if (qp_type(qp) == IB_QPT_UD ||
-		    qp_type(qp) == IB_QPT_GSI) {
-			if (skb->protocol == htons(ETH_P_IP)) {
-				memset(&hdr.reserved, 0,
-				       sizeof(hdr.reserved));
-				memcpy(&hdr.roce4grh, ip_hdr(skb),
-				       sizeof(hdr.roce4grh));
-				err = send_data_in(qp, &hdr, sizeof(hdr));
-			} else {
-				err = send_data_in(qp, ipv6_hdr(skb),
-						   sizeof(hdr));
-			}
-			if (err)
-				return err;
-		}
-		err = send_data_in(qp, payload_addr(pkt), payload_size(pkt));
+		err = rxe_send_data_in(qp, pkt);
 		if (err)
 			return err;
 	} else if (pkt->mask & RXE_WRITE_MASK) {
-		err = write_data_in(qp, pkt);
+		err = rxe_write_data_in(qp, pkt);
 		if (err)
 			return err;
 	} else if (pkt->mask & RXE_READ_MASK) {
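[Both receive paths above select the copy operation from the skb's layout and take the payload offset from the opcode's header length. A small userspace sketch of that selection logic; the enum, table contents and names are stand-ins for the driver's rxe_mr_copy_op and rxe_opcode table:]

#include <stdbool.h>

enum copy_op { COPY_TO_MR, FRAG_TO_MR };

struct opcode_info { int hdr_len; };

static const struct opcode_info opcode_tbl[] = {
	{ 12 },		/* e.g. a bare BTH header */
	{ 28 },		/* e.g. BTH + RETH */
};

static enum copy_op pick_copy_op(bool nonlinear, int opcode, int *skb_offset)
{
	/* the payload starts right after the RoCE headers, so the
	 * offset from skb->data is just the header length
	 */
	*skb_offset = opcode_tbl[opcode].hdr_len;

	/* a nonlinear skb keeps its payload in page frags, so it must
	 * be copied frag by frag rather than with one flat memcpy
	 */
	return nonlinear ? FRAG_TO_MR : COPY_TO_MR;
}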
From patchwork Thu Jul 27 20:01:28 2023
From: Bob Pearson
To: jgg@nvidia.com, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org, jhack@hpe.com
Cc: Bob Pearson
Subject: [PATCH for-next v3 09/10] RDMA/rxe: Extend do_read() in rxe_comp.c for frags
Date: Thu, 27 Jul 2023 15:01:28 -0500
Message-Id: <20230727200128.65947-10-rpearsonhpe@gmail.com>
In-Reply-To: <20230727200128.65947-1-rpearsonhpe@gmail.com>
References: <20230727200128.65947-1-rpearsonhpe@gmail.com>

Extend do_read() in rxe_comp.c to support fragmented skbs. Rename it
to rxe_do_read() and adjust its caller.
Signed-off-by: Bob Pearson
---
 drivers/infiniband/sw/rxe/rxe_comp.c | 39 ++++++++++++++++++----------
 1 file changed, 26 insertions(+), 13 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe_comp.c b/drivers/infiniband/sw/rxe/rxe_comp.c
index 670ee08f6f5a..ecaaed15c4eb 100644
--- a/drivers/infiniband/sw/rxe/rxe_comp.c
+++ b/drivers/infiniband/sw/rxe/rxe_comp.c
@@ -360,22 +360,35 @@ static inline enum comp_state check_ack(struct rxe_qp *qp,
 	return COMPST_ERROR;
 }
 
-static inline enum comp_state do_read(struct rxe_qp *qp,
-				      struct rxe_pkt_info *pkt,
-				      struct rxe_send_wqe *wqe)
+/**
+ * rxe_do_read() - Process read reply packet
+ * @qp: The queue pair
+ * @pkt: Packet info
+ * @wqe: The current work request
+ *
+ * Copy payload from incoming read reply packet into current
+ * iova.
+ *
+ * Returns: the next comp_state, or COMPST_ERROR on failure
+ */
+static inline enum comp_state rxe_do_read(struct rxe_qp *qp,
+					  struct rxe_pkt_info *pkt,
+					  struct rxe_send_wqe *wqe)
 {
 	struct sk_buff *skb = PKT_TO_SKB(pkt);
-	int skb_offset = 0;
-	int ret;
+	u8 *data_addr = payload_addr(pkt);
+	int data_len = payload_size(pkt);
+	enum rxe_mr_copy_op op;
+	int skb_offset;
+	int err;
 
-	ret = rxe_copy_dma_data(skb, qp->pd, IB_ACCESS_LOCAL_WRITE,
-				&wqe->dma, payload_addr(pkt),
-				skb_offset, payload_size(pkt),
-				RXE_COPY_TO_MR);
-	if (ret) {
-		wqe->status = IB_WC_LOC_PROT_ERR;
+	op = skb_is_nonlinear(skb) ? RXE_FRAG_TO_MR : RXE_COPY_TO_MR;
+	skb_offset = rxe_opcode[pkt->opcode].length;
+	err = rxe_copy_dma_data(skb, qp->pd, IB_ACCESS_LOCAL_WRITE,
+				&wqe->dma, data_addr,
+				skb_offset, data_len, op);
+	if (err)
 		return COMPST_ERROR;
-	}
 
 	if (wqe->dma.resid == 0 && (pkt->mask & RXE_END_MASK))
 		return COMPST_COMP_ACK;
@@ -704,7 +717,7 @@ int rxe_completer(struct rxe_qp *qp)
 		break;
 
 	case COMPST_READ:
-		state = do_read(qp, pkt, wqe);
+		state = rxe_do_read(qp, pkt, wqe);
 		break;
 
 	case COMPST_ATOMIC:
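[The completer's state choice at the end of rxe_do_read() above boils down to: abort on a failed copy, complete the WQE when nothing remains and this was the last reply, otherwise keep waiting for more read replies. A minimal model of that decision; the enum constants echo the driver's names but are declared locally here:]

#include <stdbool.h>

enum comp_state { COMPST_COMP_ACK, COMPST_UPDATE_COMP, COMPST_ERROR };

static enum comp_state read_reply_state(int copy_err, unsigned int resid,
					bool last_pkt)
{
	if (copy_err)
		return COMPST_ERROR;		/* abort the request */
	if (resid == 0 && last_pkt)
		return COMPST_COMP_ACK;		/* generate a completion */
	return COMPST_UPDATE_COMP;		/* expect more replies */
}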
From patchwork Thu Jul 27 20:01:29 2023
From: Bob Pearson
To: jgg@nvidia.com, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org, jhack@hpe.com
Cc: Bob Pearson
Subject: [PATCH for-next v3 10/10] RDMA/rxe: Enable sg code in rxe
Date: Thu, 27 Jul 2023 15:01:29 -0500
Message-Id: <20230727200128.65947-11-rpearsonhpe@gmail.com>
In-Reply-To: <20230727200128.65947-1-rpearsonhpe@gmail.com>
References: <20230727200128.65947-1-rpearsonhpe@gmail.com>

Enable the sg code in rxe: default the use_sg module parameter to true
and request fragmented skbs in rxe_init_req_packet().
Signed-off-by: Bob Pearson
---
 drivers/infiniband/sw/rxe/rxe.c     | 4 ++--
 drivers/infiniband/sw/rxe/rxe_req.c | 4 ++--
 2 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe.c b/drivers/infiniband/sw/rxe/rxe.c
index 800e8c0d437d..b52dd1704e74 100644
--- a/drivers/infiniband/sw/rxe/rxe.c
+++ b/drivers/infiniband/sw/rxe/rxe.c
@@ -14,9 +14,9 @@ MODULE_DESCRIPTION("Soft RDMA transport");
 MODULE_LICENSE("Dual BSD/GPL");
 
 /* if true allow using fragmented skbs */
-bool rxe_use_sg;
+bool rxe_use_sg = true;
 module_param_named(use_sg, rxe_use_sg, bool, 0444);
-MODULE_PARM_DESC(use_sg, "Support skb frags; default false");
+MODULE_PARM_DESC(use_sg, "Support skb frags; default true");
 
 /* free resources for a rxe device all objects created for this device must
  * have been destroyed
diff --git a/drivers/infiniband/sw/rxe/rxe_req.c b/drivers/infiniband/sw/rxe/rxe_req.c
index cf34d1a58f85..d00c24e1a569 100644
--- a/drivers/infiniband/sw/rxe/rxe_req.c
+++ b/drivers/infiniband/sw/rxe/rxe_req.c
@@ -402,7 +402,7 @@ static struct sk_buff *rxe_init_req_packet(struct rxe_qp *qp,
 	struct sk_buff *skb = NULL;
 	struct rxe_av *av;
 	struct rxe_ah *ah = NULL;
-	bool frag = false;
+	bool frag;
 	int err;
 
 	pkt->rxe = rxe;
@@ -426,7 +426,7 @@ static struct sk_buff *rxe_init_req_packet(struct rxe_qp *qp,
 		pkt->pad + RXE_ICRC_SIZE;
 
 	/* init skb */
-	skb = rxe_init_packet(qp, av, pkt, NULL);
+	skb = rxe_init_packet(qp, av, pkt, &frag);
 	if (unlikely(!skb)) {
 		err = -ENOMEM;
 		goto err_out;