From patchwork Mon Jun 19 20:21:10 2023
X-Patchwork-Submitter: Bob Pearson
X-Patchwork-Id: 13284930
From: Bob Pearson
To: jgg@nvidia.com, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org
Cc: Bob Pearson
Subject: [PATCH for-next 1/3] RDMA/rxe: Move work queue code to subroutines
Date: Mon, 19 Jun 2023 15:21:10 -0500
Message-Id: <20230619202110.45680-2-rpearsonhpe@gmail.com>
In-Reply-To: <20230619202110.45680-1-rpearsonhpe@gmail.com>
References: <20230619202110.45680-1-rpearsonhpe@gmail.com>
X-Mailing-List: linux-rdma@vger.kernel.org

This patch:
- Moves the code that initializes a qp send work queue into a new
  subroutine named rxe_init_sq.
- Moves the code that initializes a qp recv work queue into a new
  subroutine named rxe_init_rq.
- Moves initialization of the qp request and response packet queues
  ahead of the work queue initialization so that, if qp creation does
  not fully complete, cleanup can still drain the packet queues
  without a seg fault.
- Makes minor whitespace cleanups.

Fixes: 8700e3e7c485 ("Soft RoCE driver")
Signed-off-by: Bob Pearson
---
 drivers/infiniband/sw/rxe/rxe_qp.c | 163 +++++++++++++++++++----------
 1 file changed, 108 insertions(+), 55 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe_qp.c b/drivers/infiniband/sw/rxe/rxe_qp.c
index 95d4a6760c33..9dbb16699c36 100644
--- a/drivers/infiniband/sw/rxe/rxe_qp.c
+++ b/drivers/infiniband/sw/rxe/rxe_qp.c
@@ -180,13 +180,63 @@ static void rxe_qp_init_misc(struct rxe_dev *rxe, struct rxe_qp *qp,
 	atomic_set(&qp->skb_out, 0);
 }
 
+static int rxe_init_sq(struct rxe_qp *qp, struct ib_qp_init_attr *init,
+		       struct ib_udata *udata,
+		       struct rxe_create_qp_resp __user *uresp)
+{
+	struct rxe_dev *rxe = to_rdev(qp->ibqp.device);
+	int wqe_size;
+	int err;
+
+	qp->sq.max_wr = init->cap.max_send_wr;
+	wqe_size = max_t(int, init->cap.max_send_sge * sizeof(struct ib_sge),
+			 init->cap.max_inline_data);
+	qp->sq.max_sge = wqe_size / sizeof(struct ib_sge);
+	qp->sq.max_inline = wqe_size;
+	wqe_size += sizeof(struct rxe_send_wqe);
+
+	qp->sq.queue = rxe_queue_init(rxe, &qp->sq.max_wr, wqe_size,
+				      QUEUE_TYPE_FROM_CLIENT);
+	if (!qp->sq.queue) {
+		rxe_err_qp(qp, "Unable to allocate send queue");
+		err = -ENOMEM;
+		goto err_out;
+	}
+
+	/* prepare info for caller to mmap send queue if user space qp */
+	err = do_mmap_info(rxe, uresp ? &uresp->sq_mi : NULL, udata,
+			   qp->sq.queue->buf, qp->sq.queue->buf_size,
+			   &qp->sq.queue->ip);
+	if (err) {
+		rxe_err_qp(qp, "do_mmap_info failed, err = %d", err);
+		goto err_free;
+	}
+
+	/* return actual capabilities to caller which may be larger
+	 * than requested
+	 */
+	init->cap.max_send_wr = qp->sq.max_wr;
+	init->cap.max_send_sge = qp->sq.max_sge;
+	init->cap.max_inline_data = qp->sq.max_inline;
+
+	return 0;
+
+err_free:
+	vfree(qp->sq.queue->buf);
+	kfree(qp->sq.queue);
+	qp->sq.queue = NULL;
+err_out:
+	return err;
+}
+
 static int rxe_qp_init_req(struct rxe_dev *rxe, struct rxe_qp *qp,
 			   struct ib_qp_init_attr *init, struct ib_udata *udata,
 			   struct rxe_create_qp_resp __user *uresp)
 {
 	int err;
-	int wqe_size;
-	enum queue_type type;
+
+	/* if we don't finish qp create make sure queue is valid */
+	skb_queue_head_init(&qp->req_pkts);
 
 	err = sock_create_kern(&init_net, AF_INET, SOCK_DGRAM, 0, &qp->sk);
 	if (err < 0)
@@ -201,32 +251,10 @@ static int rxe_qp_init_req(struct rxe_dev *rxe, struct rxe_qp *qp,
 	 * (0xc000 - 0xffff).
 	 */
 	qp->src_port = RXE_ROCE_V2_SPORT + (hash_32(qp_num(qp), 14) & 0x3fff);
-	qp->sq.max_wr = init->cap.max_send_wr;
-
-	/* These caps are limited by rxe_qp_chk_cap() done by the caller */
-	wqe_size = max_t(int, init->cap.max_send_sge * sizeof(struct ib_sge),
-			 init->cap.max_inline_data);
-	qp->sq.max_sge = init->cap.max_send_sge =
-		wqe_size / sizeof(struct ib_sge);
-	qp->sq.max_inline = init->cap.max_inline_data = wqe_size;
-	wqe_size += sizeof(struct rxe_send_wqe);
-
-	type = QUEUE_TYPE_FROM_CLIENT;
-	qp->sq.queue = rxe_queue_init(rxe, &qp->sq.max_wr,
-				      wqe_size, type);
-	if (!qp->sq.queue)
-		return -ENOMEM;
 
-	err = do_mmap_info(rxe, uresp ? &uresp->sq_mi : NULL, udata,
-			   qp->sq.queue->buf, qp->sq.queue->buf_size,
-			   &qp->sq.queue->ip);
-
-	if (err) {
-		vfree(qp->sq.queue->buf);
-		kfree(qp->sq.queue);
-		qp->sq.queue = NULL;
+	err = rxe_init_sq(qp, init, udata, uresp);
+	if (err)
 		return err;
-	}
 
 	qp->req.wqe_index = queue_get_producer(qp->sq.queue,
 					       QUEUE_TYPE_FROM_CLIENT);
@@ -234,8 +262,6 @@ static int rxe_qp_init_req(struct rxe_dev *rxe, struct rxe_qp *qp,
 	qp->req.opcode = -1;
 	qp->comp.opcode = -1;
 
-	skb_queue_head_init(&qp->req_pkts);
-
 	rxe_init_task(&qp->req.task, qp, rxe_requester);
 	rxe_init_task(&qp->comp.task, qp, rxe_completer);
 
@@ -247,40 +273,67 @@ static int rxe_qp_init_req(struct rxe_dev *rxe, struct rxe_qp *qp,
 	return 0;
 }
 
+static int rxe_init_rq(struct rxe_qp *qp, struct ib_qp_init_attr *init,
+		       struct ib_udata *udata,
+		       struct rxe_create_qp_resp __user *uresp)
+{
+	struct rxe_dev *rxe = to_rdev(qp->ibqp.device);
+	int wqe_size;
+	int err;
+
+	qp->rq.max_wr = init->cap.max_recv_wr;
+	qp->rq.max_sge = init->cap.max_recv_sge;
+	wqe_size = sizeof(struct rxe_recv_wqe) +
+			qp->rq.max_sge*sizeof(struct ib_sge);
+
+	qp->rq.queue = rxe_queue_init(rxe, &qp->rq.max_wr, wqe_size,
+				      QUEUE_TYPE_FROM_CLIENT);
+	if (!qp->rq.queue) {
+		rxe_err_qp(qp, "Unable to allocate recv queue");
+		err = -ENOMEM;
+		goto err_out;
+	}
+
+	/* prepare info for caller to mmap recv queue if user space qp */
+	err = do_mmap_info(rxe, uresp ? &uresp->rq_mi : NULL, udata,
+			   qp->rq.queue->buf, qp->rq.queue->buf_size,
+			   &qp->rq.queue->ip);
+	if (err) {
+		rxe_err_qp(qp, "do_mmap_info failed, err = %d", err);
+		goto err_free;
+	}
+
+	/* return actual capabilities to caller which may be larger
+	 * than requested
+	 */
+	init->cap.max_recv_wr = qp->rq.max_wr;
+
+	return 0;
+
+err_free:
+	vfree(qp->rq.queue->buf);
+	kfree(qp->rq.queue);
+	qp->rq.queue = NULL;
+err_out:
+	return err;
+}
+
 static int rxe_qp_init_resp(struct rxe_dev *rxe, struct rxe_qp *qp,
 			    struct ib_qp_init_attr *init,
 			    struct ib_udata *udata,
 			    struct rxe_create_qp_resp __user *uresp)
 {
 	int err;
-	int wqe_size;
-	enum queue_type type;
+
+	/* if we don't finish qp create make sure queue is valid */
+	skb_queue_head_init(&qp->resp_pkts);
 
 	if (!qp->srq) {
-		qp->rq.max_wr = init->cap.max_recv_wr;
-		qp->rq.max_sge = init->cap.max_recv_sge;
-
-		wqe_size = rcv_wqe_size(qp->rq.max_sge);
-
-		type = QUEUE_TYPE_FROM_CLIENT;
-		qp->rq.queue = rxe_queue_init(rxe, &qp->rq.max_wr,
-					      wqe_size, type);
-		if (!qp->rq.queue)
-			return -ENOMEM;
-
-		err = do_mmap_info(rxe, uresp ? &uresp->rq_mi : NULL, udata,
-				   qp->rq.queue->buf, qp->rq.queue->buf_size,
-				   &qp->rq.queue->ip);
-		if (err) {
-			vfree(qp->rq.queue->buf);
-			kfree(qp->rq.queue);
-			qp->rq.queue = NULL;
+		err = rxe_init_rq(qp, init, udata, uresp);
+		if (err)
 			return err;
-		}
 	}
 
-	skb_queue_head_init(&qp->resp_pkts);
-
 	rxe_init_task(&qp->resp.task, qp, rxe_responder);
 
 	qp->resp.opcode = OPCODE_NONE;
@@ -307,10 +360,10 @@ int rxe_qp_from_init(struct rxe_dev *rxe, struct rxe_qp *qp, struct rxe_pd *pd,
 	if (srq)
 		rxe_get(srq);
 
-	qp->pd = pd;
-	qp->rcq = rcq;
-	qp->scq = scq;
-	qp->srq = srq;
+	qp->pd			= pd;
+	qp->rcq			= rcq;
+	qp->scq			= scq;
+	qp->srq			= srq;
 
 	atomic_inc(&rcq->num_wq);
 	atomic_inc(&scq->num_wq);
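A note on the capability write-back in rxe_init_sq()/rxe_init_rq() above:
the values stored into init->cap propagate out to the user space caller
of qp creation, which may be granted more than it asked for. A minimal
sketch of that round trip through the standard libibverbs API follows
(illustrative only, not part of the patch; it assumes a valid pd and cq
and trims error handling):

	#include <infiniband/verbs.h>
	#include <stdio.h>

	/* create an RC qp and report the capabilities the driver granted;
	 * rxe may round the sge and inline limits up because it sizes the
	 * send wqe from max(max_send_sge * sizeof(ib_sge), max_inline_data)
	 */
	static struct ibv_qp *create_qp_verbose(struct ibv_pd *pd,
						struct ibv_cq *cq)
	{
		struct ibv_qp_init_attr attr = {
			.send_cq = cq,
			.recv_cq = cq,
			.qp_type = IBV_QPT_RC,
			.cap = {
				.max_send_wr = 16,
				.max_recv_wr = 16,
				.max_send_sge = 3,
				.max_recv_sge = 3,
				.max_inline_data = 64,
			},
		};
		struct ibv_qp *qp = ibv_create_qp(pd, &attr);

		/* on success attr.cap holds the granted (possibly larger)
		 * values, per the ibv_create_qp() man page
		 */
		if (qp)
			printf("granted: send_wr=%u send_sge=%u inline=%u\n",
			       attr.cap.max_send_wr, attr.cap.max_send_sge,
			       attr.cap.max_inline_data);
		return qp;
	}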
From patchwork Mon Jun 19 20:21:12 2023
X-Patchwork-Submitter: Bob Pearson
X-Patchwork-Id: 13284931
From: Bob Pearson
To: jgg@nvidia.com, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org
Cc: Bob Pearson, syzbot+2da1965168e7dbcba136@syzkaller.appspotmail.com
Subject: [PATCH for-next 2/3] RDMA/rxe: Fix unsafe drain work queue code
Date: Mon, 19 Jun 2023 15:21:12 -0500
Message-Id: <20230619202110.45680-3-rpearsonhpe@gmail.com>
In-Reply-To: <20230619202110.45680-1-rpearsonhpe@gmail.com>
References: <20230619202110.45680-1-rpearsonhpe@gmail.com>
X-Mailing-List: linux-rdma@vger.kernel.org

If create_qp does not fully succeed, it is possible for the qp cleanup
code to attempt to drain the send or recv work queues before the queues
have been created, causing a seg fault. This patch checks that the
queues exist before attempting to drain them.

Reported-by: syzbot+2da1965168e7dbcba136@syzkaller.appspotmail.com
Closes: https://lore.kernel.org/linux-rdma/00000000000012d89205fe7cfe00@google.com/raw
Fixes: 49dc9c1f0c7e ("RDMA/rxe: Cleanup reset state handling in rxe_resp.c")
Fixes: fbdeb828a21f ("RDMA/rxe: Cleanup error state handling in rxe_comp.c")
Signed-off-by: Bob Pearson
---
 drivers/infiniband/sw/rxe/rxe_comp.c | 4 ++++
 drivers/infiniband/sw/rxe/rxe_resp.c | 4 ++++
 2 files changed, 8 insertions(+)

diff --git a/drivers/infiniband/sw/rxe/rxe_comp.c b/drivers/infiniband/sw/rxe/rxe_comp.c
index 0c0ae214c3a9..c2a53418a9ce 100644
--- a/drivers/infiniband/sw/rxe/rxe_comp.c
+++ b/drivers/infiniband/sw/rxe/rxe_comp.c
@@ -594,6 +594,10 @@ static void flush_send_queue(struct rxe_qp *qp, bool notify)
 	struct rxe_queue *q = qp->sq.queue;
 	int err;
 
+	/* send queue never got created. nothing to do. */
+	if (!qp->sq.queue)
+		return;
+
 	while ((wqe = queue_head(q, q->type))) {
 		if (notify) {
 			err = flush_send_wqe(qp, wqe);
diff --git a/drivers/infiniband/sw/rxe/rxe_resp.c b/drivers/infiniband/sw/rxe/rxe_resp.c
index 8a3c9c2c5a2d..9b4f95820b40 100644
--- a/drivers/infiniband/sw/rxe/rxe_resp.c
+++ b/drivers/infiniband/sw/rxe/rxe_resp.c
@@ -1467,6 +1467,10 @@ static void flush_recv_queue(struct rxe_qp *qp, bool notify)
 		return;
 	}
 
+	/* recv queue not created. nothing to do. */
+	if (!qp->rq.queue)
+		return;
+
 	while ((wqe = queue_head(q, q->type))) {
 		if (notify) {
 			err = flush_recv_wqe(qp, wqe);
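The guard added above is the whole fix. A stand-alone sketch of the
pattern, using a hypothetical harness whose names merely mirror the
driver (fake_qp is not a real rxe structure), shows why the early
return makes the cleanup path safe when creation failed before the
queue was allocated:

	#include <stddef.h>
	#include <stdio.h>

	struct queue { int depth; };
	struct fake_qp { struct queue *sq_queue; };	/* stand-in for rxe_qp */

	static void flush_send_queue(struct fake_qp *qp)
	{
		/* the added guard: queue never got created, nothing to do */
		if (!qp->sq_queue)
			return;

		printf("draining %d entries\n", qp->sq_queue->depth);
	}

	int main(void)
	{
		/* create_qp failed before the send queue was allocated */
		struct fake_qp qp = { .sq_queue = NULL };

		flush_send_queue(&qp);	/* safe: returns before dereferencing */
		return 0;
	}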
From patchwork Mon Jun 19 20:21:14 2023
X-Patchwork-Submitter: Bob Pearson
X-Patchwork-Id: 13284932
From: Bob Pearson
To: jgg@nvidia.com, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org
Cc: Bob Pearson
Subject: [PATCH for-next 3/3] RDMA/rxe: Fix rxe_modify_srq
Date: Mon, 19 Jun 2023 15:21:14 -0500
Message-Id: <20230619202110.45680-4-rpearsonhpe@gmail.com>
In-Reply-To: <20230619202110.45680-1-rpearsonhpe@gmail.com>
References: <20230619202110.45680-1-rpearsonhpe@gmail.com>
X-Mailing-List: linux-rdma@vger.kernel.org

This patch corrects an error in rxe_modify_srq: if the caller changes
the srq size, the actual new value, which may be larger than the one
requested, is not returned to the caller. It also open codes the
subroutine rcv_wqe_size(), which adds very little value, and makes
some whitespace changes.

Fixes: 8700e3e7c485 ("Soft RoCE driver")
Signed-off-by: Bob Pearson
---
 drivers/infiniband/sw/rxe/rxe_loc.h |  6 ----
 drivers/infiniband/sw/rxe/rxe_srq.c | 55 ++++++++++++++++++-----------
 2 files changed, 34 insertions(+), 27 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe_loc.h b/drivers/infiniband/sw/rxe/rxe_loc.h
index 666e06a82bc9..4d2a8ef52c85 100644
--- a/drivers/infiniband/sw/rxe/rxe_loc.h
+++ b/drivers/infiniband/sw/rxe/rxe_loc.h
@@ -136,12 +136,6 @@ static inline int qp_mtu(struct rxe_qp *qp)
 	return IB_MTU_4096;
 }
 
-static inline int rcv_wqe_size(int max_sge)
-{
-	return sizeof(struct rxe_recv_wqe) +
-		max_sge * sizeof(struct ib_sge);
-}
-
 void free_rd_atomic_resource(struct resp_res *res);
 
 static inline void rxe_advance_resp_resource(struct rxe_qp *qp)
diff --git a/drivers/infiniband/sw/rxe/rxe_srq.c b/drivers/infiniband/sw/rxe/rxe_srq.c
index 27ca82ec0826..9fd37936ee5b 100644
--- a/drivers/infiniband/sw/rxe/rxe_srq.c
+++ b/drivers/infiniband/sw/rxe/rxe_srq.c
@@ -46,27 +46,28 @@ int rxe_srq_from_init(struct rxe_dev *rxe, struct rxe_srq *srq,
 		      struct rxe_create_srq_resp __user *uresp)
 {
 	int err;
-	int srq_wqe_size;
+	int wqe_size;
 	struct rxe_queue *q;
-	enum queue_type type;
 
-	srq->ibsrq.event_handler = init->event_handler;
-	srq->ibsrq.srq_context = init->srq_context;
-	srq->limit = init->attr.srq_limit;
-	srq->srq_num = srq->elem.index;
-	srq->rq.max_wr = init->attr.max_wr;
-	srq->rq.max_sge = init->attr.max_sge;
+	srq->ibsrq.event_handler	= init->event_handler;
+	srq->ibsrq.srq_context		= init->srq_context;
+	srq->limit			= init->attr.srq_limit;
+	srq->srq_num			= srq->elem.index;
+	srq->rq.max_wr			= init->attr.max_wr;
+	srq->rq.max_sge			= init->attr.max_sge;
 
-	srq_wqe_size = rcv_wqe_size(srq->rq.max_sge);
+	wqe_size = sizeof(struct rxe_recv_wqe) +
+			srq->rq.max_sge*sizeof(struct ib_sge);
 
 	spin_lock_init(&srq->rq.producer_lock);
 	spin_lock_init(&srq->rq.consumer_lock);
 
-	type = QUEUE_TYPE_FROM_CLIENT;
-	q = rxe_queue_init(rxe, &srq->rq.max_wr, srq_wqe_size, type);
-	if (!q) {
+	srq->rq.queue = rxe_queue_init(rxe, &srq->rq.max_wr, wqe_size,
+				       QUEUE_TYPE_FROM_CLIENT);
+	if (!srq->rq.queue) {
 		rxe_dbg_srq(srq, "Unable to allocate queue\n");
-		return -ENOMEM;
+		err = -ENOMEM;
+		goto err_out;
 	}
 
 	srq->rq.queue = q;
@@ -74,11 +75,12 @@ int rxe_srq_from_init(struct rxe_dev *rxe, struct rxe_srq *srq,
 	err = do_mmap_info(rxe, uresp ? &uresp->mi : NULL, udata, q->buf,
 			   q->buf_size, &q->ip);
 	if (err) {
-		vfree(q->buf);
-		kfree(q);
-		return err;
+		rxe_dbg_srq(srq, "Unable to init mmap info for caller\n");
+		goto err_free;
 	}
 
+	init->attr.max_wr = srq->rq.max_wr;
+
 	if (uresp) {
 		if (copy_to_user(&uresp->srq_num, &srq->srq_num,
 				 sizeof(uresp->srq_num))) {
@@ -88,6 +90,12 @@ int rxe_srq_from_init(struct rxe_dev *rxe, struct rxe_srq *srq,
 	}
 
 	return 0;
+
+err_free:
+	vfree(q->buf);
+	kfree(q);
+err_out:
+	return err;
 }
 
 int rxe_srq_chk_attr(struct rxe_dev *rxe, struct rxe_srq *srq,
@@ -148,6 +156,7 @@ int rxe_srq_from_attr(struct rxe_dev *rxe, struct rxe_srq *srq,
 	int err;
 	struct rxe_queue *q = srq->rq.queue;
 	struct mminfo __user *mi = NULL;
+	int wqe_size;
 
 	if (mask & IB_SRQ_MAX_WR) {
 		/*
@@ -156,12 +165,16 @@ int rxe_srq_from_attr(struct rxe_dev *rxe, struct rxe_srq *srq,
 		 */
 		mi = u64_to_user_ptr(ucmd->mmap_info_addr);
 
-		err = rxe_queue_resize(q, &attr->max_wr,
-				       rcv_wqe_size(srq->rq.max_sge), udata, mi,
-				       &srq->rq.producer_lock,
+		wqe_size = sizeof(struct rxe_recv_wqe) +
+				srq->rq.max_sge*sizeof(struct ib_sge);
+
+		err = rxe_queue_resize(q, &attr->max_wr, wqe_size,
+				       udata, mi, &srq->rq.producer_lock,
 				       &srq->rq.consumer_lock);
 		if (err)
-			goto err2;
+			goto err_free;
+
+		srq->rq.max_wr = attr->max_wr;
 	}
 
 	if (mask & IB_SRQ_LIMIT)
@@ -169,7 +182,7 @@ int rxe_srq_from_attr(struct rxe_dev *rxe, struct rxe_srq *srq,
 
 	return 0;
 
-err2:
+err_free:
 	rxe_queue_cleanup(q);
 	srq->rq.queue = NULL;
 	return err;
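For context on the write-back being fixed: user space resizes an srq
through ibv_modify_srq(), and the size the driver grants can exceed the
size requested. A sketch against the standard verbs API of how a caller
would observe the granted value (illustrative only, not part of the
patch):

	#include <infiniband/verbs.h>

	/* ask the driver to grow the srq, then read back the size it
	 * actually granted; with the fix, the granted (possibly larger)
	 * value is reported rather than the stale requested one
	 */
	static int grow_srq(struct ibv_srq *srq, unsigned int new_wr)
	{
		struct ibv_srq_attr attr = {
			.max_wr = new_wr,
		};
		int err = ibv_modify_srq(srq, &attr, IBV_SRQ_MAX_WR);

		if (err)
			return err;

		/* attr.max_wr now reflects the actual queue size */
		return ibv_query_srq(srq, &attr);
	}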