From patchwork Tue Jun 20 13:55:19 2023
From: Bob Pearson <rpearsonhpe@gmail.com>
To: jgg@nvidia.com, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org
Cc: Bob Pearson <rpearsonhpe@gmail.com>
Subject: [PATCH for-next v2 1/3] RDMA/rxe: Move work queue code to subroutines
Date: Tue, 20 Jun 2023 08:55:19 -0500
Message-Id: <20230620135519.9365-2-rpearsonhpe@gmail.com>
In-Reply-To: <20230620135519.9365-1-rpearsonhpe@gmail.com>
References: <20230620135519.9365-1-rpearsonhpe@gmail.com>

This patch:
 - Moves the code that initializes a qp's send work queue into a
   subroutine named rxe_init_sq.
 - Moves the code that initializes a qp's recv work queue into a
   subroutine named rxe_init_rq.
 - Moves the initialization of the qp request and response packet
   queues ahead of the work queue initialization so that, if qp
   creation does not complete, cleanup can still attempt to drain
   the packet queues without a seg fault.
 - Makes minor whitespace cleanups.

Fixes: 8700e3e7c485 ("Soft RoCE driver")
Signed-off-by: Bob Pearson <rpearsonhpe@gmail.com>
Acked-by: Zhu Yanjun <zyjzyj2000@gmail.com>
---
 drivers/infiniband/sw/rxe/rxe_qp.c | 163 +++++++++++++++++++----------
 1 file changed, 108 insertions(+), 55 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe_qp.c b/drivers/infiniband/sw/rxe/rxe_qp.c
index 95d4a6760c33..9dbb16699c36 100644
--- a/drivers/infiniband/sw/rxe/rxe_qp.c
+++ b/drivers/infiniband/sw/rxe/rxe_qp.c
@@ -180,13 +180,63 @@ static void rxe_qp_init_misc(struct rxe_dev *rxe, struct rxe_qp *qp,
 	atomic_set(&qp->skb_out, 0);
 }
 
+static int rxe_init_sq(struct rxe_qp *qp, struct ib_qp_init_attr *init,
+		       struct ib_udata *udata,
+		       struct rxe_create_qp_resp __user *uresp)
+{
+	struct rxe_dev *rxe = to_rdev(qp->ibqp.device);
+	int wqe_size;
+	int err;
+
+	qp->sq.max_wr = init->cap.max_send_wr;
+	wqe_size = max_t(int, init->cap.max_send_sge * sizeof(struct ib_sge),
+			 init->cap.max_inline_data);
+	qp->sq.max_sge = wqe_size / sizeof(struct ib_sge);
+	qp->sq.max_inline = wqe_size;
+	wqe_size += sizeof(struct rxe_send_wqe);
+
+	qp->sq.queue = rxe_queue_init(rxe, &qp->sq.max_wr, wqe_size,
+				      QUEUE_TYPE_FROM_CLIENT);
+	if (!qp->sq.queue) {
+		rxe_err_qp(qp, "Unable to allocate send queue");
+		err = -ENOMEM;
+		goto err_out;
+	}
+
+	/* prepare info for caller to mmap send queue if user space qp */
+	err = do_mmap_info(rxe, uresp ? &uresp->sq_mi : NULL, udata,
+			   qp->sq.queue->buf, qp->sq.queue->buf_size,
+			   &qp->sq.queue->ip);
+	if (err) {
+		rxe_err_qp(qp, "do_mmap_info failed, err = %d", err);
+		goto err_free;
+	}
+
+	/* return actual capabilities to caller which may be larger
+	 * than requested
+	 */
+	init->cap.max_send_wr = qp->sq.max_wr;
+	init->cap.max_send_sge = qp->sq.max_sge;
+	init->cap.max_inline_data = qp->sq.max_inline;
+
+	return 0;
+
+err_free:
+	vfree(qp->sq.queue->buf);
+	kfree(qp->sq.queue);
+	qp->sq.queue = NULL;
+err_out:
+	return err;
+}
+
 static int rxe_qp_init_req(struct rxe_dev *rxe, struct rxe_qp *qp,
 			   struct ib_qp_init_attr *init,
 			   struct ib_udata *udata,
 			   struct rxe_create_qp_resp __user *uresp)
 {
 	int err;
-	int wqe_size;
-	enum queue_type type;
+
+	/* if we don't finish qp create make sure queue is valid */
+	skb_queue_head_init(&qp->req_pkts);
 
 	err = sock_create_kern(&init_net, AF_INET, SOCK_DGRAM, 0, &qp->sk);
 	if (err < 0)
@@ -201,32 +251,10 @@ static int rxe_qp_init_req(struct rxe_dev *rxe, struct rxe_qp *qp,
 	 * (0xc000 - 0xffff).
 	 */
 	qp->src_port = RXE_ROCE_V2_SPORT + (hash_32(qp_num(qp), 14) & 0x3fff);
-	qp->sq.max_wr = init->cap.max_send_wr;
-
-	/* These caps are limited by rxe_qp_chk_cap() done by the caller */
-	wqe_size = max_t(int, init->cap.max_send_sge * sizeof(struct ib_sge),
-			 init->cap.max_inline_data);
-	qp->sq.max_sge = init->cap.max_send_sge =
-			wqe_size / sizeof(struct ib_sge);
-	qp->sq.max_inline = init->cap.max_inline_data = wqe_size;
-	wqe_size += sizeof(struct rxe_send_wqe);
-
-	type = QUEUE_TYPE_FROM_CLIENT;
-	qp->sq.queue = rxe_queue_init(rxe, &qp->sq.max_wr,
-				      wqe_size, type);
-	if (!qp->sq.queue)
-		return -ENOMEM;
 
-	err = do_mmap_info(rxe, uresp ? &uresp->sq_mi : NULL, udata,
-			   qp->sq.queue->buf, qp->sq.queue->buf_size,
-			   &qp->sq.queue->ip);
-
-	if (err) {
-		vfree(qp->sq.queue->buf);
-		kfree(qp->sq.queue);
-		qp->sq.queue = NULL;
+	err = rxe_init_sq(qp, init, udata, uresp);
+	if (err)
 		return err;
-	}
 
 	qp->req.wqe_index = queue_get_producer(qp->sq.queue,
 					       QUEUE_TYPE_FROM_CLIENT);
@@ -234,8 +262,6 @@ static int rxe_qp_init_req(struct rxe_dev *rxe, struct rxe_qp *qp,
 	qp->req.opcode = -1;
 	qp->comp.opcode = -1;
 
-	skb_queue_head_init(&qp->req_pkts);
-
 	rxe_init_task(&qp->req.task, qp, rxe_requester);
 	rxe_init_task(&qp->comp.task, qp, rxe_completer);
@@ -247,40 +273,67 @@
 	return 0;
 }
 
+static int rxe_init_rq(struct rxe_qp *qp, struct ib_qp_init_attr *init,
+		       struct ib_udata *udata,
+		       struct rxe_create_qp_resp __user *uresp)
+{
+	struct rxe_dev *rxe = to_rdev(qp->ibqp.device);
+	int wqe_size;
+	int err;
+
+	qp->rq.max_wr = init->cap.max_recv_wr;
+	qp->rq.max_sge = init->cap.max_recv_sge;
+	wqe_size = sizeof(struct rxe_recv_wqe) +
+				qp->rq.max_sge*sizeof(struct ib_sge);
+
+	qp->rq.queue = rxe_queue_init(rxe, &qp->rq.max_wr, wqe_size,
+				      QUEUE_TYPE_FROM_CLIENT);
+	if (!qp->rq.queue) {
+		rxe_err_qp(qp, "Unable to allocate recv queue");
+		err = -ENOMEM;
+		goto err_out;
+	}
+
+	/* prepare info for caller to mmap recv queue if user space qp */
+	err = do_mmap_info(rxe, uresp ? &uresp->rq_mi : NULL, udata,
+			   qp->rq.queue->buf, qp->rq.queue->buf_size,
+			   &qp->rq.queue->ip);
+	if (err) {
+		rxe_err_qp(qp, "do_mmap_info failed, err = %d", err);
+		goto err_free;
+	}
+
+	/* return actual capabilities to caller which may be larger
+	 * than requested
+	 */
+	init->cap.max_recv_wr = qp->rq.max_wr;
+
+	return 0;
+
+err_free:
+	vfree(qp->rq.queue->buf);
+	kfree(qp->rq.queue);
+	qp->rq.queue = NULL;
+err_out:
+	return err;
+}
+
 static int rxe_qp_init_resp(struct rxe_dev *rxe, struct rxe_qp *qp,
 			    struct ib_qp_init_attr *init,
 			    struct ib_udata *udata,
 			    struct rxe_create_qp_resp __user *uresp)
 {
 	int err;
-	int wqe_size;
-	enum queue_type type;
+
+	/* if we don't finish qp create make sure queue is valid */
+	skb_queue_head_init(&qp->resp_pkts);
 
 	if (!qp->srq) {
-		qp->rq.max_wr = init->cap.max_recv_wr;
-		qp->rq.max_sge = init->cap.max_recv_sge;
-
-		wqe_size = rcv_wqe_size(qp->rq.max_sge);
-
-		type = QUEUE_TYPE_FROM_CLIENT;
-		qp->rq.queue = rxe_queue_init(rxe, &qp->rq.max_wr,
-					      wqe_size, type);
-		if (!qp->rq.queue)
-			return -ENOMEM;
-
-		err = do_mmap_info(rxe, uresp ? &uresp->rq_mi : NULL, udata,
-				   qp->rq.queue->buf, qp->rq.queue->buf_size,
-				   &qp->rq.queue->ip);
-		if (err) {
-			vfree(qp->rq.queue->buf);
-			kfree(qp->rq.queue);
-			qp->rq.queue = NULL;
+		err = rxe_init_rq(qp, init, udata, uresp);
+		if (err)
 			return err;
-		}
 	}
 
-	skb_queue_head_init(&qp->resp_pkts);
-
 	rxe_init_task(&qp->resp.task, qp, rxe_responder);
 
 	qp->resp.opcode = OPCODE_NONE;
@@ -307,10 +360,10 @@ int rxe_qp_from_init(struct rxe_dev *rxe, struct rxe_qp *qp, struct rxe_pd *pd,
 	if (srq)
 		rxe_get(srq);
 
-	qp->pd = pd;
-	qp->rcq = rcq;
-	qp->scq = scq;
-	qp->srq = srq;
+	qp->pd			= pd;
+	qp->rcq			= rcq;
+	qp->scq			= scq;
+	qp->srq			= srq;
 
 	atomic_inc(&rcq->num_wq);
 	atomic_inc(&scq->num_wq);
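A side effect of rxe_init_sq() above worth noting: because max_sge and
max_inline are both rederived from the single rounded wqe_size, the
capabilities returned through init->cap can be larger than the ones
requested, which the comment in the patch calls out. The following is a
standalone user-space sketch of that arithmetic, not kernel code;
struct sge_stub stands in for struct ib_sge and the request values are
made up:

#include <stdio.h>

/* stand-in for struct ib_sge: u64 addr, u32 length, u32 lkey (16 bytes) */
struct sge_stub {
	unsigned long long addr;
	unsigned int length;
	unsigned int lkey;
};

int main(void)
{
	int max_send_sge = 4;		/* hypothetical requested values */
	int max_inline_data = 100;

	/* the max_t(int, ...) in rxe_init_sq(): the WQE payload area must
	 * hold either the SGE array or the inline data, whichever is bigger
	 */
	int wqe_size = max_send_sge * (int)sizeof(struct sge_stub);
	if (max_inline_data > wqe_size)
		wqe_size = max_inline_data;

	/* both capabilities are rederived from the rounded size */
	int max_sge = wqe_size / (int)sizeof(struct sge_stub);
	int max_inline = wqe_size;

	printf("requested: max_send_sge=%d max_inline_data=%d\n",
	       max_send_sge, max_inline_data);
	printf("returned:  max_send_sge=%d max_inline_data=%d\n",
	       max_sge, max_inline);
	return 0;
}

With these numbers a caller asking for 4 SGEs and 100 bytes of inline
data gets back 6 and 100, since the 100-byte payload area has room for
six 16-byte SGEs.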
From patchwork Tue Jun 20 13:55:21 2023

From: Bob Pearson <rpearsonhpe@gmail.com>
To: jgg@nvidia.com, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org
Cc: Bob Pearson <rpearsonhpe@gmail.com>,
    syzbot+2da1965168e7dbcba136@syzkaller.appspotmail.com
Subject: [PATCH for-next v2 2/3] RDMA/rxe: Fix unsafe drain work queue code
Date: Tue, 20 Jun 2023 08:55:21 -0500
Message-Id: <20230620135519.9365-3-rpearsonhpe@gmail.com>
In-Reply-To: <20230620135519.9365-1-rpearsonhpe@gmail.com>
References: <20230620135519.9365-1-rpearsonhpe@gmail.com>

If create_qp does not fully succeed, it is possible for the qp cleanup
code to attempt to drain the send or recv work queues before the queues
have been created, causing a seg fault. This patch checks that the
queues exist before attempting to drain them.

Reported-by: syzbot+2da1965168e7dbcba136@syzkaller.appspotmail.com
Closes: https://lore.kernel.org/linux-rdma/00000000000012d89205fe7cfe00@google.com/raw
Fixes: 49dc9c1f0c7e ("RDMA/rxe: Cleanup reset state handling in rxe_resp.c")
Fixes: fbdeb828a21f ("RDMA/rxe: Cleanup error state handling in rxe_comp.c")
Signed-off-by: Bob Pearson <rpearsonhpe@gmail.com>
---
 drivers/infiniband/sw/rxe/rxe_comp.c | 4 ++++
 drivers/infiniband/sw/rxe/rxe_resp.c | 4 ++++
 2 files changed, 8 insertions(+)

diff --git a/drivers/infiniband/sw/rxe/rxe_comp.c b/drivers/infiniband/sw/rxe/rxe_comp.c
index 0c0ae214c3a9..c2a53418a9ce 100644
--- a/drivers/infiniband/sw/rxe/rxe_comp.c
+++ b/drivers/infiniband/sw/rxe/rxe_comp.c
@@ -594,6 +594,10 @@ static void flush_send_queue(struct rxe_qp *qp, bool notify)
 	struct rxe_queue *q = qp->sq.queue;
 	int err;
 
+	/* send queue never got created. nothing to do. */
+	if (!qp->sq.queue)
+		return;
+
 	while ((wqe = queue_head(q, q->type))) {
 		if (notify) {
 			err = flush_send_wqe(qp, wqe);
diff --git a/drivers/infiniband/sw/rxe/rxe_resp.c b/drivers/infiniband/sw/rxe/rxe_resp.c
index 8a3c9c2c5a2d..9b4f95820b40 100644
--- a/drivers/infiniband/sw/rxe/rxe_resp.c
+++ b/drivers/infiniband/sw/rxe/rxe_resp.c
@@ -1467,6 +1467,10 @@ static void flush_recv_queue(struct rxe_qp *qp, bool notify)
 		return;
 	}
 
+	/* recv queue not created. nothing to do. */
+	if (!qp->rq.queue)
+		return;
+
 	while ((wqe = queue_head(q, q->type))) {
 		if (notify) {
 			err = flush_recv_wqe(qp, wqe);
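The fix follows the usual rule for tearing down partially constructed
objects: cleanup runs unconditionally, so each drain step has to
tolerate members that were never allocated. Below is a minimal
user-space sketch of the same guard, with invented names, not the rxe
code itself:

#include <stdio.h>

struct queue {
	int depth;
};

struct qp_stub {
	struct queue *sq;	/* still NULL if create_qp failed early */
	struct queue *rq;
};

/* analogue of flush_send_queue()/flush_recv_queue(): bail out when the
 * queue was never created instead of dereferencing a NULL pointer
 */
static void flush_queue(const struct queue *q, const char *name)
{
	if (!q) {
		printf("%s not created, nothing to do\n", name);
		return;
	}
	printf("draining %s, depth %d\n", name, q->depth);
}

int main(void)
{
	/* simulate a create_qp that failed before allocating its queues */
	struct qp_stub qp = { .sq = NULL, .rq = NULL };

	flush_queue(qp.sq, "send queue");
	flush_queue(qp.rq, "recv queue");
	return 0;
}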
From patchwork Tue Jun 20 14:01:43 2023

From: Bob Pearson <rpearsonhpe@gmail.com>
To: jgg@nvidia.com, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org
Cc: Bob Pearson <rpearsonhpe@gmail.com>, lkp@intel.com
Subject: [PATCH for-next v2 3/3] RDMA/rxe: Fix rxe_modify_srq
Date: Tue, 20 Jun 2023 09:01:43 -0500
Message-Id: <20230620140142.9452-1-rpearsonhpe@gmail.com>

This patch corrects an error in rxe_modify_srq: if the caller changes
the srq size, the actual new value, which may be larger than the one
requested, is not returned to the caller. The patch also open codes the
subroutine rcv_wqe_size(), which adds very little value, and makes some
whitespace changes.

Fixes: 8700e3e7c485 ("Soft RoCE driver")
Signed-off-by: Bob Pearson <rpearsonhpe@gmail.com>
---
v2: Reverted an incorrect change in v1 which partially removed the
    variable q in rxe_srq_from_init(). Fixed a typo in the subject
    line.
    Reported-by: lkp@intel.com
    Link: https://lore.kernel.org/linux-rdma/202306201807.sCYZpuDH-lkp@intel.com/raw

 drivers/infiniband/sw/rxe/rxe_loc.h |  6 ---
 drivers/infiniband/sw/rxe/rxe_srq.c | 60 +++++++++++++++++------------
 2 files changed, 36 insertions(+), 30 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe_loc.h b/drivers/infiniband/sw/rxe/rxe_loc.h
index 666e06a82bc9..4d2a8ef52c85 100644
--- a/drivers/infiniband/sw/rxe/rxe_loc.h
+++ b/drivers/infiniband/sw/rxe/rxe_loc.h
@@ -136,12 +136,6 @@ static inline int qp_mtu(struct rxe_qp *qp)
 	return IB_MTU_4096;
 }
 
-static inline int rcv_wqe_size(int max_sge)
-{
-	return sizeof(struct rxe_recv_wqe) +
-		max_sge * sizeof(struct ib_sge);
-}
-
 void free_rd_atomic_resource(struct resp_res *res);
 
 static inline void rxe_advance_resp_resource(struct rxe_qp *qp)
diff --git a/drivers/infiniband/sw/rxe/rxe_srq.c b/drivers/infiniband/sw/rxe/rxe_srq.c
index 27ca82ec0826..3661cb627d28 100644
--- a/drivers/infiniband/sw/rxe/rxe_srq.c
+++ b/drivers/infiniband/sw/rxe/rxe_srq.c
@@ -45,40 +45,41 @@ int rxe_srq_from_init(struct rxe_dev *rxe, struct rxe_srq *srq,
 		      struct ib_srq_init_attr *init, struct ib_udata *udata,
 		      struct rxe_create_srq_resp __user *uresp)
 {
-	int err;
-	int srq_wqe_size;
 	struct rxe_queue *q;
-	enum queue_type type;
+	int wqe_size;
+	int err;
 
-	srq->ibsrq.event_handler = init->event_handler;
-	srq->ibsrq.srq_context = init->srq_context;
-	srq->limit = init->attr.srq_limit;
-	srq->srq_num = srq->elem.index;
-	srq->rq.max_wr = init->attr.max_wr;
-	srq->rq.max_sge = init->attr.max_sge;
+	srq->ibsrq.event_handler	= init->event_handler;
+	srq->ibsrq.srq_context		= init->srq_context;
+	srq->limit			= init->attr.srq_limit;
+	srq->srq_num			= srq->elem.index;
+	srq->rq.max_wr			= init->attr.max_wr;
+	srq->rq.max_sge			= init->attr.max_sge;
 
-	srq_wqe_size = rcv_wqe_size(srq->rq.max_sge);
+	wqe_size = sizeof(struct rxe_recv_wqe) +
+			srq->rq.max_sge*sizeof(struct ib_sge);
 
 	spin_lock_init(&srq->rq.producer_lock);
 	spin_lock_init(&srq->rq.consumer_lock);
 
-	type = QUEUE_TYPE_FROM_CLIENT;
-	q = rxe_queue_init(rxe, &srq->rq.max_wr, srq_wqe_size, type);
+	q = rxe_queue_init(rxe, &srq->rq.max_wr, wqe_size,
+			   QUEUE_TYPE_FROM_CLIENT);
 	if (!q) {
 		rxe_dbg_srq(srq, "Unable to allocate queue\n");
-		return -ENOMEM;
+		err = -ENOMEM;
+		goto err_out;
 	}
 
-	srq->rq.queue = q;
-
 	err = do_mmap_info(rxe, uresp ? &uresp->mi : NULL, udata, q->buf,
 			   q->buf_size, &q->ip);
 	if (err) {
-		vfree(q->buf);
-		kfree(q);
-		return err;
+		rxe_dbg_srq(srq, "Unable to init mmap info for caller\n");
+		goto err_free;
 	}
 
+	srq->rq.queue = q;
+	init->attr.max_wr = srq->rq.max_wr;
+
 	if (uresp) {
 		if (copy_to_user(&uresp->srq_num, &srq->srq_num,
 				 sizeof(uresp->srq_num))) {
@@ -88,6 +89,12 @@ int rxe_srq_from_init(struct rxe_dev *rxe, struct rxe_srq *srq,
 	}
 
 	return 0;
+
+err_free:
+	vfree(q->buf);
+	kfree(q);
+err_out:
+	return err;
 }
 
 int rxe_srq_chk_attr(struct rxe_dev *rxe, struct rxe_srq *srq,
@@ -145,9 +152,10 @@ int rxe_srq_from_attr(struct rxe_dev *rxe, struct rxe_srq *srq,
 		      struct ib_srq_attr *attr, enum ib_srq_attr_mask mask,
 		      struct rxe_modify_srq_cmd *ucmd, struct ib_udata *udata)
 {
-	int err;
 	struct rxe_queue *q = srq->rq.queue;
 	struct mminfo __user *mi = NULL;
+	int wqe_size;
+	int err;
 
 	if (mask & IB_SRQ_MAX_WR) {
 		/*
@@ -156,12 +164,16 @@ int rxe_srq_from_attr(struct rxe_dev *rxe, struct rxe_srq *srq,
 		 */
 		mi = u64_to_user_ptr(ucmd->mmap_info_addr);
 
-		err = rxe_queue_resize(q, &attr->max_wr,
-				       rcv_wqe_size(srq->rq.max_sge), udata, mi,
-				       &srq->rq.producer_lock,
+		wqe_size = sizeof(struct rxe_recv_wqe) +
+				srq->rq.max_sge*sizeof(struct ib_sge);
+
+		err = rxe_queue_resize(q, &attr->max_wr, wqe_size,
+				       udata, mi, &srq->rq.producer_lock,
 				       &srq->rq.consumer_lock);
 		if (err)
-			goto err2;
+			goto err_free;
+
+		srq->rq.max_wr = attr->max_wr;
 	}
 
 	if (mask & IB_SRQ_LIMIT)
@@ -169,7 +181,7 @@
 
 	return 0;
 
-err2:
+err_free:
 	rxe_queue_cleanup(q);
 	srq->rq.queue = NULL;
 	return err;
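For reference, the size computation that now appears open coded in both
rxe_srq_from_init() and rxe_srq_from_attr() is the one rcv_wqe_size()
used to wrap: a fixed recv WQE header plus one ib_sge per
scatter/gather entry. A user-space sketch with stand-in structs (the
real kernel struct sizes differ, so the numbers are illustrative only):

#include <stdio.h>

/* stand-ins for struct ib_sge and the fixed part of struct rxe_recv_wqe */
struct sge_stub {
	unsigned long long addr;
	unsigned int length;
	unsigned int lkey;
};

struct recv_wqe_hdr_stub {
	unsigned long long wr_id;
	unsigned int num_sge;
	unsigned int padding;
};

/* the formula: sizeof(struct rxe_recv_wqe) + max_sge * sizeof(struct ib_sge) */
static int wqe_size(int max_sge)
{
	return (int)sizeof(struct recv_wqe_hdr_stub) +
		max_sge * (int)sizeof(struct sge_stub);
}

int main(void)
{
	for (int max_sge = 1; max_sge <= 4; max_sge++)
		printf("max_sge=%d -> wqe_size=%d bytes\n",
		       max_sge, wqe_size(max_sge));
	return 0;
}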