From patchwork Mon Jun 21 05:53:36 2021
From: Jack Wang
To: linux-rdma@vger.kernel.org
Cc: bvanassche@acm.org, leon@kernel.org, dledford@redhat.com, jgg@ziepe.ca,
    axboe@kernel.dk, haris.iqbal@ionos.com, jinpu.wang@ionos.com, Jack Wang,
    Md Haris Iqbal
Subject: [PATCH resend for-next 1/5] RDMA/rtrs: Introduce head/tail wr
Date: Mon, 21 Jun 2021 07:53:36 +0200
Message-Id: <20210621055340.11789-2-jinpu.wang@ionos.com>
In-Reply-To: <20210621055340.11789-1-jinpu.wang@ionos.com>
References: <20210621055340.11789-1-jinpu.wang@ionos.com>
X-Mailing-List: linux-rdma@vger.kernel.org

From: Jack Wang

Introduce a tail wr, which we can send as the last wr: in a later patch we
want to send the local invalidate wr after the rdma wr. While at it, also
fix a coding style issue.

Signed-off-by: Jack Wang
Reviewed-by: Md Haris Iqbal
---
 drivers/infiniband/ulp/rtrs/rtrs-clt.c | 16 ++++++++-------
 drivers/infiniband/ulp/rtrs/rtrs-pri.h |  3 ++-
 drivers/infiniband/ulp/rtrs/rtrs.c     | 28 +++++++++++++++-----------
 3 files changed, 27 insertions(+), 20 deletions(-)

diff --git a/drivers/infiniband/ulp/rtrs/rtrs-clt.c b/drivers/infiniband/ulp/rtrs/rtrs-clt.c
index 125e0bead262..6b078e0df1fd 100644
--- a/drivers/infiniband/ulp/rtrs/rtrs-clt.c
+++ b/drivers/infiniband/ulp/rtrs/rtrs-clt.c
@@ -480,7 +480,7 @@ static int rtrs_post_send_rdma(struct rtrs_clt_con *con,
 
 	return rtrs_iu_post_rdma_write_imm(&con->c, req->iu, &sge, 1,
 					   rbuf->rkey, rbuf->addr + off,
-					   imm, flags, wr);
+					   imm, flags, wr, NULL);
 }
 
 static void process_io_rsp(struct rtrs_clt_sess *sess, u32 msg_id,
@@ -999,9 +999,10 @@ rtrs_clt_get_copy_req(struct rtrs_clt_sess *alive_sess,
 }
 
 static int rtrs_post_rdma_write_sg(struct rtrs_clt_con *con,
-				struct rtrs_clt_io_req *req,
-				struct rtrs_rbuf *rbuf,
-				u32 size, u32 imm)
+				   struct rtrs_clt_io_req *req,
+				   struct rtrs_rbuf *rbuf,
+				   u32 size, u32 imm, struct ib_send_wr *wr,
+				   struct ib_send_wr *tail)
 {
 	struct rtrs_clt_sess *sess = to_clt_sess(con->c.sess);
 	struct ib_sge *sge = req->sge;
@@ -1009,6 +1010,7 @@ static int rtrs_post_rdma_write_sg(struct rtrs_clt_con *con,
 	struct scatterlist *sg;
 	size_t num_sge;
 	int i;
+	struct ib_send_wr *ptail = NULL;
 
 	for_each_sg(req->sglist, sg, req->sg_cnt, i) {
 		sge[i].addr = sg_dma_address(sg);
@@ -1033,7 +1035,7 @@ static int rtrs_post_rdma_write_sg(struct rtrs_clt_con *con,
 
 	return rtrs_iu_post_rdma_write_imm(&con->c, req->iu, sge, num_sge,
 					   rbuf->rkey, rbuf->addr, imm,
-					   flags, NULL);
+					   flags, wr, ptail);
 }
 
 static int rtrs_clt_write_req(struct rtrs_clt_io_req *req)
@@ -1081,8 +1083,8 @@ static int rtrs_clt_write_req(struct rtrs_clt_io_req *req)
 	rtrs_clt_update_all_stats(req, WRITE);
 
 	ret = rtrs_post_rdma_write_sg(req->con, req, rbuf,
-				req->usr_len + sizeof(*msg),
-				imm);
+				      req->usr_len + sizeof(*msg),
+				      imm, NULL, NULL);
 	if (unlikely(ret)) {
 		rtrs_err_rl(s,
 			    "Write request failed: error=%d path=%s [%s:%u]\n",
diff --git a/drivers/infiniband/ulp/rtrs/rtrs-pri.h b/drivers/infiniband/ulp/rtrs/rtrs-pri.h
index 76cca2058f6f..36f184a3b676 100644
--- a/drivers/infiniband/ulp/rtrs/rtrs-pri.h
+++ b/drivers/infiniband/ulp/rtrs/rtrs-pri.h
@@ -305,7 +305,8 @@ int rtrs_iu_post_rdma_write_imm(struct rtrs_con *con, struct rtrs_iu *iu,
 				struct ib_sge *sge, unsigned int num_sge,
 				u32 rkey, u64 rdma_addr, u32 imm_data,
 				enum ib_send_flags flags,
-				struct ib_send_wr *head);
+				struct ib_send_wr *head,
+				struct ib_send_wr *tail);
 
 int rtrs_post_recv_empty(struct rtrs_con *con, struct ib_cqe *cqe);
 int rtrs_post_rdma_write_imm_empty(struct rtrs_con *con, struct ib_cqe *cqe,
diff --git a/drivers/infiniband/ulp/rtrs/rtrs.c b/drivers/infiniband/ulp/rtrs/rtrs.c
index 08e1f7d82c95..61919ebd92b2 100644
--- a/drivers/infiniband/ulp/rtrs/rtrs.c
+++ b/drivers/infiniband/ulp/rtrs/rtrs.c
@@ -105,18 +105,21 @@ int rtrs_post_recv_empty(struct rtrs_con *con, struct ib_cqe *cqe)
 EXPORT_SYMBOL_GPL(rtrs_post_recv_empty);
 
 static int rtrs_post_send(struct ib_qp *qp, struct ib_send_wr *head,
-			  struct ib_send_wr *wr)
+			  struct ib_send_wr *wr, struct ib_send_wr *tail)
 {
 	if (head) {
-		struct ib_send_wr *tail = head;
+		struct ib_send_wr *next = head;
 
-		while (tail->next)
-			tail = tail->next;
-		tail->next = wr;
+		while (next->next)
+			next = next->next;
+		next->next = wr;
 	} else {
 		head = wr;
 	}
 
+	if (tail)
+		wr->next = tail;
+
 	return ib_post_send(qp, head, NULL);
 }
 
@@ -142,15 +145,16 @@ int rtrs_iu_post_send(struct rtrs_con *con, struct rtrs_iu *iu, size_t size,
 		.send_flags = IB_SEND_SIGNALED,
 	};
 
-	return rtrs_post_send(con->qp, head, &wr);
+	return rtrs_post_send(con->qp, head, &wr, NULL);
 }
 EXPORT_SYMBOL_GPL(rtrs_iu_post_send);
 
 int rtrs_iu_post_rdma_write_imm(struct rtrs_con *con, struct rtrs_iu *iu,
-			struct ib_sge *sge, unsigned int num_sge,
-			u32 rkey, u64 rdma_addr, u32 imm_data,
-			enum ib_send_flags flags,
-			struct ib_send_wr *head)
+				struct ib_sge *sge, unsigned int num_sge,
+				u32 rkey, u64 rdma_addr, u32 imm_data,
+				enum ib_send_flags flags,
+				struct ib_send_wr *head,
+				struct ib_send_wr *tail)
 {
 	struct ib_rdma_wr wr;
 	int i;
@@ -174,7 +178,7 @@ int rtrs_iu_post_rdma_write_imm(struct rtrs_con *con, struct rtrs_iu *iu,
 		if (WARN_ON(sge[i].length == 0))
 			return -EINVAL;
 
-	return rtrs_post_send(con->qp, head, &wr.wr);
+	return rtrs_post_send(con->qp, head, &wr.wr, tail);
 }
 EXPORT_SYMBOL_GPL(rtrs_iu_post_rdma_write_imm);
 
@@ -191,7 +195,7 @@ int rtrs_post_rdma_write_imm_empty(struct rtrs_con *con, struct ib_cqe *cqe,
 		.wr.ex.imm_data = cpu_to_be32(imm_data),
 	};
 
-	return rtrs_post_send(con->qp, head, &wr.wr);
+	return rtrs_post_send(con->qp, head, &wr.wr, NULL);
 }
 EXPORT_SYMBOL_GPL(rtrs_post_rdma_write_imm_empty);

From patchwork Mon Jun 21 05:53:37 2021
From: Jack Wang
To: linux-rdma@vger.kernel.org
Cc: bvanassche@acm.org, leon@kernel.org, dledford@redhat.com, jgg@ziepe.ca,
    axboe@kernel.dk, haris.iqbal@ionos.com, jinpu.wang@ionos.com, Jack Wang,
    Dima Stepanov
Subject: [PATCH resend for-next 2/5] RDMA/rtrs-clt: Write path fast memory registration
Date: Mon, 21 Jun 2021 07:53:37 +0200
Message-Id: <20210621055340.11789-3-jinpu.wang@ionos.com>
In-Reply-To: <20210621055340.11789-1-jinpu.wang@ionos.com>
References: <20210621055340.11789-1-jinpu.wang@ionos.com>

From: Jack Wang

With fast memory registration in the write path, we can reduce memory
consumption by using a smaller max_send_sge, support IO bigger than
116 KB (29 segments * 4 KB) without splitting, and make the IO path
more symmetric.

To avoid occasional MR registration failures, wait for the invalidation
to finish before registering the new MR. Introduce a refcount and only
finish the request when both the local invalidation and the IO reply
have arrived.
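The completion refcount described in the commit message above can be modeled in a few lines of userspace C. This is an illustrative sketch, not driver code: `struct io_req_model` and the helper names stand in for `struct rtrs_clt_io_req` and the kernel's `refcount_t` API (`refcount_set`/`refcount_inc`/`refcount_dec_and_test`), and a plain `int` replaces the hardened refcount type.

```c
#include <stdbool.h>

/* Stand-in for struct rtrs_clt_io_req; names are illustrative only. */
struct io_req_model {
	int ref;        /* models refcount_t ref */
	bool finished;  /* set when the request is completed */
};

/* Request starts with one reference, like refcount_set(&req->ref, 1). */
void req_init(struct io_req_model *req)
{
	req->ref = 1;
	req->finished = false;
}

/* Posting the local invalidation takes an extra reference. */
void req_get(struct io_req_model *req)
{
	req->ref++;
}

/*
 * Each completion source (IO reply, local invalidation) drops one
 * reference; only the last drop finishes the request, mirroring
 * refcount_dec_and_test(). Returns true when the request was finished.
 */
bool req_put(struct io_req_model *req)
{
	if (--req->ref)
		return false;
	req->finished = true;
	return true;
}
```

Whichever of the two completions arrives second finishes the request; the order does not matter, which is exactly why the refcount is needed.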
Signed-off-by: Jack Wang
Signed-off-by: Md Haris Iqbal
Signed-off-by: Dima Stepanov
---
 drivers/infiniband/ulp/rtrs/rtrs-clt.c | 100 ++++++++++++++++++-------
 drivers/infiniband/ulp/rtrs/rtrs-clt.h |   1 +
 2 files changed, 74 insertions(+), 27 deletions(-)

diff --git a/drivers/infiniband/ulp/rtrs/rtrs-clt.c b/drivers/infiniband/ulp/rtrs/rtrs-clt.c
index 6b078e0df1fd..87edcec3e9e3 100644
--- a/drivers/infiniband/ulp/rtrs/rtrs-clt.c
+++ b/drivers/infiniband/ulp/rtrs/rtrs-clt.c
@@ -412,6 +412,7 @@ static void complete_rdma_req(struct rtrs_clt_io_req *req, int errno,
 			req->inv_errno = errno;
 		}
 
+		refcount_inc(&req->ref);
 		err = rtrs_inv_rkey(req);
 		if (unlikely(err)) {
 			rtrs_err(con->c.sess, "Send INV WR key=%#x: %d\n",
@@ -427,10 +428,14 @@ static void complete_rdma_req(struct rtrs_clt_io_req *req, int errno,
 
 				return;
 			}
+			if (!refcount_dec_and_test(&req->ref))
+				return;
 		}
 		ib_dma_unmap_sg(sess->s.dev->ib_dev, req->sglist,
 				req->sg_cnt, req->dir);
 	}
+	if (!refcount_dec_and_test(&req->ref))
+		return;
 	if (sess->clt->mp_policy == MP_POLICY_MIN_INFLIGHT)
 		atomic_dec(&sess->stats->inflight);
 
@@ -438,10 +443,9 @@ static void complete_rdma_req(struct rtrs_clt_io_req *req, int errno,
 	req->con = NULL;
 
 	if (errno) {
-		rtrs_err_rl(con->c.sess,
-			    "IO request failed: error=%d path=%s [%s:%u]\n",
+		rtrs_err_rl(con->c.sess, "IO request failed: error=%d path=%s [%s:%u] notify=%d\n",
 			    errno, kobject_name(&sess->kobj), sess->hca_name,
-			    sess->hca_port);
+			    sess->hca_port, notify);
 	}
 
 	if (notify)
@@ -956,6 +960,7 @@ static void rtrs_clt_init_req(struct rtrs_clt_io_req *req,
 	req->need_inv = false;
 	req->need_inv_comp = false;
 	req->inv_errno = 0;
+	refcount_set(&req->ref, 1);
 
 	iov_iter_kvec(&iter, READ, vec, 1, usr_len);
 	len = _copy_from_iter(req->iu->buf, usr_len, &iter);
@@ -1000,7 +1005,7 @@ rtrs_clt_get_copy_req(struct rtrs_clt_sess *alive_sess,
 
 static int rtrs_post_rdma_write_sg(struct rtrs_clt_con *con,
 				   struct rtrs_clt_io_req *req,
-				   struct rtrs_rbuf *rbuf,
+				   struct rtrs_rbuf *rbuf, bool fr_en,
 				   u32 size, u32 imm, struct ib_send_wr *wr,
 				   struct ib_send_wr *tail)
 {
@@ -1012,17 +1017,26 @@ static int rtrs_post_rdma_write_sg(struct rtrs_clt_con *con,
 	int i;
 	struct ib_send_wr *ptail = NULL;
 
-	for_each_sg(req->sglist, sg, req->sg_cnt, i) {
-		sge[i].addr = sg_dma_address(sg);
-		sge[i].length = sg_dma_len(sg);
-		sge[i].lkey = sess->s.dev->ib_pd->local_dma_lkey;
+	if (fr_en) {
+		i = 0;
+		sge[i].addr = req->mr->iova;
+		sge[i].length = req->mr->length;
+		sge[i].lkey = req->mr->lkey;
+		i++;
+		num_sge = 2;
+		ptail = tail;
+	} else {
+		for_each_sg(req->sglist, sg, req->sg_cnt, i) {
+			sge[i].addr = sg_dma_address(sg);
+			sge[i].length = sg_dma_len(sg);
+			sge[i].lkey = sess->s.dev->ib_pd->local_dma_lkey;
+		}
+		num_sge = 1 + req->sg_cnt;
 	}
 	sge[i].addr = req->iu->dma_addr;
 	sge[i].length = size;
 	sge[i].lkey = sess->s.dev->ib_pd->local_dma_lkey;
 
-	num_sge = 1 + req->sg_cnt;
-
 	/*
 	 * From time to time we have to post signalled sends,
 	 * or send queue will fill up and only QP reset can help.
@@ -1038,6 +1052,21 @@ static int rtrs_post_rdma_write_sg(struct rtrs_clt_con *con,
 			flags, wr, ptail);
 }
 
+static int rtrs_map_sg_fr(struct rtrs_clt_io_req *req, size_t count)
+{
+	int nr;
+
+	/* Align the MR to a 4K page size to match the block virt boundary */
+	nr = ib_map_mr_sg(req->mr, req->sglist, count, NULL, SZ_4K);
+	if (nr < 0)
+		return nr;
+	if (unlikely(nr < req->sg_cnt))
+		return -EINVAL;
+	ib_update_fast_reg_key(req->mr, ib_inc_rkey(req->mr->rkey));
+
+	return nr;
+}
+
 static int rtrs_clt_write_req(struct rtrs_clt_io_req *req)
 {
 	struct rtrs_clt_con *con = req->con;
@@ -1048,6 +1077,10 @@ static int rtrs_clt_write_req(struct rtrs_clt_io_req *req)
 	struct rtrs_rbuf *rbuf;
 	int ret, count = 0;
 	u32 imm, buf_id;
+	struct ib_reg_wr rwr;
+	struct ib_send_wr inv_wr;
+	struct ib_send_wr *wr = NULL;
+	bool fr_en = false;
 
 	const size_t tsize = sizeof(*msg) + req->data_len + req->usr_len;
 
@@ -1076,15 +1109,43 @@ static int rtrs_clt_write_req(struct rtrs_clt_io_req *req)
 	req->sg_size = tsize;
 	rbuf = &sess->rbufs[buf_id];
 
+	if (count) {
+		ret = rtrs_map_sg_fr(req, count);
+		if (ret < 0) {
+			rtrs_err_rl(s,
+				    "Write request failed, failed to map fast reg. data, err: %d\n",
+				    ret);
+			ib_dma_unmap_sg(sess->s.dev->ib_dev, req->sglist,
+					req->sg_cnt, req->dir);
+			return ret;
+		}
+		inv_wr = (struct ib_send_wr) {
+			.opcode = IB_WR_LOCAL_INV,
+			.wr_cqe = &req->inv_cqe,
+			.send_flags = IB_SEND_SIGNALED,
+			.ex.invalidate_rkey = req->mr->rkey,
+		};
+		req->inv_cqe.done = rtrs_clt_inv_rkey_done;
+		rwr = (struct ib_reg_wr) {
+			.wr.opcode = IB_WR_REG_MR,
+			.wr.wr_cqe = &fast_reg_cqe,
+			.mr = req->mr,
+			.key = req->mr->rkey,
+			.access = (IB_ACCESS_LOCAL_WRITE),
+		};
+		wr = &rwr.wr;
+		fr_en = true;
+		refcount_inc(&req->ref);
+	}
 	/*
 	 * Update stats now, after request is successfully sent it is not
 	 * safe anymore to touch it.
 	 */
 	rtrs_clt_update_all_stats(req, WRITE);
 
-	ret = rtrs_post_rdma_write_sg(req->con, req, rbuf,
+	ret = rtrs_post_rdma_write_sg(req->con, req, rbuf, fr_en,
 				      req->usr_len + sizeof(*msg),
-				      imm, NULL, NULL);
+				      imm, wr, &inv_wr);
 	if (unlikely(ret)) {
 		rtrs_err_rl(s,
 			    "Write request failed: error=%d path=%s [%s:%u]\n",
@@ -1100,21 +1161,6 @@ static int rtrs_clt_write_req(struct rtrs_clt_io_req *req)
 	return ret;
 }
 
-static int rtrs_map_sg_fr(struct rtrs_clt_io_req *req, size_t count)
-{
-	int nr;
-
-	/* Align the MR to a 4K page size to match the block virt boundary */
-	nr = ib_map_mr_sg(req->mr, req->sglist, count, NULL, SZ_4K);
-	if (nr < 0)
-		return nr;
-	if (unlikely(nr < req->sg_cnt))
-		return -EINVAL;
-	ib_update_fast_reg_key(req->mr, ib_inc_rkey(req->mr->rkey));
-
-	return nr;
-}
-
 static int rtrs_clt_read_req(struct rtrs_clt_io_req *req)
 {
 	struct rtrs_clt_con *con = req->con;
diff --git a/drivers/infiniband/ulp/rtrs/rtrs-clt.h b/drivers/infiniband/ulp/rtrs/rtrs-clt.h
index eed2a20ee9be..e276a2dfcf7c 100644
--- a/drivers/infiniband/ulp/rtrs/rtrs-clt.h
+++ b/drivers/infiniband/ulp/rtrs/rtrs-clt.h
@@ -116,6 +116,7 @@ struct rtrs_clt_io_req {
 	int inv_errno;
 	bool need_inv_comp;
 	bool need_inv;
+	refcount_t ref;
 };
 
 struct rtrs_rbuf {

From patchwork Mon Jun 21 05:53:38 2021
From: Jack Wang
To: linux-rdma@vger.kernel.org
Cc: bvanassche@acm.org, leon@kernel.org, dledford@redhat.com, jgg@ziepe.ca,
    axboe@kernel.dk, haris.iqbal@ionos.com, jinpu.wang@ionos.com, Jack Wang,
    Md Haris Iqbal
Subject: [PATCH resend for-next 3/5] RDMA/rtrs_clt: Alloc less memory with write path fast memory registration
Date: Mon, 21 Jun 2021 07:53:38 +0200
Message-Id: <20210621055340.11789-4-jinpu.wang@ionos.com>
In-Reply-To: <20210621055340.11789-1-jinpu.wang@ionos.com>
References: <20210621055340.11789-1-jinpu.wang@ionos.com>

From: Jack Wang

With write path fast memory registration, we need less memory for each
request: we can reduce max_send_sge to save memory usage. Also convert
the kmalloc_array to kcalloc.

Signed-off-by: Jack Wang
Reviewed-by: Md Haris Iqbal
---
 drivers/infiniband/ulp/rtrs/rtrs-clt.c | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/drivers/infiniband/ulp/rtrs/rtrs-clt.c b/drivers/infiniband/ulp/rtrs/rtrs-clt.c
index 87edcec3e9e3..3b25a375afc3 100644
--- a/drivers/infiniband/ulp/rtrs/rtrs-clt.c
+++ b/drivers/infiniband/ulp/rtrs/rtrs-clt.c
@@ -1372,8 +1372,7 @@ static int alloc_sess_reqs(struct rtrs_clt_sess *sess)
 		if (!req->iu)
 			goto out;
 
-		req->sge = kmalloc_array(clt->max_segments + 1,
-					 sizeof(*req->sge), GFP_KERNEL);
+		req->sge = kcalloc(2, sizeof(*req->sge), GFP_KERNEL);
 		if (!req->sge)
 			goto out;
 
@@ -1675,7 +1674,7 @@ static int create_con_cq_qp(struct rtrs_clt_con *con)
 			      sess->queue_depth * 3 + 1);
 		max_recv_wr = min_t(int, wr_limit,
 			      sess->queue_depth * 3 + 1);
-		max_send_sge = sess->clt->max_segments + 1;
+		max_send_sge = 2;
 	}
 	cq_num = max_send_wr + max_recv_wr;
 	/* alloc iu to recv new rkey reply when server reports flags set */

From patchwork Mon Jun 21 05:53:39 2021
From: Jack Wang
To: linux-rdma@vger.kernel.org
Cc: bvanassche@acm.org, leon@kernel.org, dledford@redhat.com, jgg@ziepe.ca,
    axboe@kernel.dk, haris.iqbal@ionos.com, jinpu.wang@ionos.com, Jack Wang,
    Md Haris Iqbal
Subject: [PATCH resend for-next 4/5] RDMA/rtrs-clt: Raise MAX_SEGMENTS
Date: Mon, 21 Jun 2021 07:53:39 +0200
Message-Id: <20210621055340.11789-5-jinpu.wang@ionos.com>
In-Reply-To: <20210621055340.11789-1-jinpu.wang@ionos.com>
References: <20210621055340.11789-1-jinpu.wang@ionos.com>

From: Jack Wang

As we can now do fast memory registration on write, we can increase
max_segments; the default limits IO to 512 KB.
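The 512 KB limit above follows directly from the segment count: 128 segments of 4 KB each. A tiny standalone check of the arithmetic, where `max_io_bytes` is an illustrative helper rather than anything in the driver:

```c
/* 4 KB segment, matching the block virt boundary the driver maps MRs to. */
enum { SEG_SIZE_4K = 4096 };

/* Hypothetical helper: maximum unsplit IO size for a given segment count. */
static inline unsigned int max_io_bytes(unsigned int max_segments)
{
	return max_segments * SEG_SIZE_4K;
}
```

The same formula also reproduces the pre-patch figure quoted earlier in the series: 29 segments * 4 KB = 116 KB.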
Signed-off-by: Jack Wang Reviewed-by: Md Haris Iqbal --- drivers/infiniband/ulp/rtrs/rtrs-clt.c | 6 ++++-- 1 file changed, 4 insertions(+), 2 deletions(-) diff --git a/drivers/infiniband/ulp/rtrs/rtrs-clt.c b/drivers/infiniband/ulp/rtrs/rtrs-clt.c index 3b25a375afc3..042110739941 100644 --- a/drivers/infiniband/ulp/rtrs/rtrs-clt.c +++ b/drivers/infiniband/ulp/rtrs/rtrs-clt.c @@ -32,6 +32,8 @@ #define RTRS_RECONNECT_SEED 8 #define FIRST_CONN 0x01 +/* limit to 128 * 4k = 512k max IO */ +#define RTRS_MAX_SEGMENTS 128 MODULE_DESCRIPTION("RDMA Transport Client"); MODULE_LICENSE("GPL"); @@ -1545,7 +1547,7 @@ static struct rtrs_clt_sess *alloc_sess(struct rtrs_clt *clt, rdma_addr_size((struct sockaddr *)path->src)); strscpy(sess->s.sessname, clt->sessname, sizeof(sess->s.sessname)); sess->clt = clt; - sess->max_pages_per_mr = max_segments; + sess->max_pages_per_mr = RTRS_MAX_SEGMENTS; init_waitqueue_head(&sess->state_wq); sess->state = RTRS_CLT_CONNECTING; atomic_set(&sess->connected_cnt, 0); @@ -2695,7 +2697,7 @@ static struct rtrs_clt *alloc_clt(const char *sessname, size_t paths_num, clt->paths_up = MAX_PATHS_NUM; clt->port = port; clt->pdu_sz = pdu_sz; - clt->max_segments = max_segments; + clt->max_segments = RTRS_MAX_SEGMENTS; clt->reconnect_delay_sec = reconnect_delay_sec; clt->max_reconnect_attempts = max_reconnect_attempts; clt->priv = priv; From patchwork Mon Jun 21 05:53:40 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jinpu Wang X-Patchwork-Id: 12333847 X-Patchwork-Delegate: jgg@ziepe.ca Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-18.8 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER, INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: 
from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 31C35C48BE5 for ; Mon, 21 Jun 2021 05:55:30 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 0DF7860240 for ; Mon, 21 Jun 2021 05:55:30 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229597AbhFUF5m (ORCPT ); Mon, 21 Jun 2021 01:57:42 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:54454 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229618AbhFUF4E (ORCPT ); Mon, 21 Jun 2021 01:56:04 -0400 Received: from mail-ej1-x62f.google.com (mail-ej1-x62f.google.com [IPv6:2a00:1450:4864:20::62f]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id A2943C061760 for ; Sun, 20 Jun 2021 22:53:50 -0700 (PDT) Received: by mail-ej1-x62f.google.com with SMTP id og14so26729585ejc.5 for ; Sun, 20 Jun 2021 22:53:50 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=ionos.com; s=google; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=ijpwdXfU4jHah3Oo2g8JYu3i3PUYSWgdqTlXPVgKNy4=; b=aHIPaEcswa5v163wt/1COtJQnqrxeXOgt/YP4FfRF70FQH9FrXgCGHR8RpWS0357DV /3/uW2ZJYhK5PVIEfrJ5KNlahJ7psCw8mqRCRvuwJKZPvSoL3gk/UCCwBrSVYgrtkwqZ UoVY1nscJU0xojZE1Fc765xDTM0sClay7idw3AdcPa8ftUkf3OHY6tteSZmN0EF45ccK pGZHXJeR4s2x8cqwTFiErg46wpal0/rW/AXAUQweKd0THxgHDHnW8OtO4DOW3irCccFF 5lF8mF5Y2YiYss6XGWY1JbfhQ/1jM30lKJtpe4uu/g+yoW0UesW3kxtMlyTAcSNDxwjo Reiw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=ijpwdXfU4jHah3Oo2g8JYu3i3PUYSWgdqTlXPVgKNy4=; b=FdzEgrgKH1OAMIeL9SbSP6svxZoK+cJtCG1TKyYI80Uu0gsa5lTUANtsaIPSbxesj5 sCeyg2NCljTVBMA8dCjSC7u5a8qeO9dtS9jDZEXqbUQyQs3icPz7fZQa7IPDXeATEUUy 
From: Jack Wang
To: linux-rdma@vger.kernel.org
Cc: bvanassche@acm.org, leon@kernel.org, dledford@redhat.com, jgg@ziepe.ca,
    axboe@kernel.dk, haris.iqbal@ionos.com, jinpu.wang@ionos.com
Subject: [PATCH resend for-next 5/5] rnbd/rtrs-clt: Query and use max_segments from rtrs-clt.
Date: Mon, 21 Jun 2021 07:53:40 +0200
Message-Id: <20210621055340.11789-6-jinpu.wang@ionos.com>
In-Reply-To: <20210621055340.11789-1-jinpu.wang@ionos.com>
References: <20210621055340.11789-1-jinpu.wang@ionos.com>

With fast memory registration on write requests, rnbd-clt can do bigger
IOs without splitting them. rnbd-clt can now query rtrs-clt for
max_segments instead of using BMAX_SEGMENTS.

BMAX_SEGMENTS is no longer needed, so remove it.
Cc: Jens Axboe
Signed-off-by: Jack Wang
Reviewed-by: Md Haris Iqbal
---
 drivers/block/rnbd/rnbd-clt.c          |  5 +++--
 drivers/block/rnbd/rnbd-clt.h          |  5 +----
 drivers/infiniband/ulp/rtrs/rtrs-clt.c | 18 ++++++++----------
 drivers/infiniband/ulp/rtrs/rtrs.h     |  2 +-
 4 files changed, 13 insertions(+), 17 deletions(-)

diff --git a/drivers/block/rnbd/rnbd-clt.c b/drivers/block/rnbd/rnbd-clt.c
index c604a402cd5c..d6f12e6c91f7 100644
--- a/drivers/block/rnbd/rnbd-clt.c
+++ b/drivers/block/rnbd/rnbd-clt.c
@@ -92,7 +92,7 @@ static int rnbd_clt_set_dev_attr(struct rnbd_clt_dev *dev,
 	dev->fua = !!(rsp->cache_policy & RNBD_FUA);
 
 	dev->max_hw_sectors = sess->max_io_size / SECTOR_SIZE;
-	dev->max_segments = BMAX_SEGMENTS;
+	dev->max_segments = sess->max_segments;
 
 	return 0;
 }
@@ -1292,7 +1292,7 @@ find_and_get_or_create_sess(const char *sessname,
 	sess->rtrs = rtrs_clt_open(&rtrs_ops, sessname,
 				   paths, path_cnt, port_nr,
 				   0, /* Do not use pdu of rtrs */
-				   RECONNECT_DELAY, BMAX_SEGMENTS,
+				   RECONNECT_DELAY,
 				   MAX_RECONNECTS, nr_poll_queues);
 	if (IS_ERR(sess->rtrs)) {
 		err = PTR_ERR(sess->rtrs);
@@ -1306,6 +1306,7 @@ find_and_get_or_create_sess(const char *sessname,
 	sess->max_io_size = attrs.max_io_size;
 	sess->queue_depth = attrs.queue_depth;
 	sess->nr_poll_queues = nr_poll_queues;
+	sess->max_segments = attrs.max_segments;
 
 	err = setup_mq_tags(sess);
 	if (err)
diff --git a/drivers/block/rnbd/rnbd-clt.h b/drivers/block/rnbd/rnbd-clt.h
index b5322c5aaac0..9ef8c4f306f2 100644
--- a/drivers/block/rnbd/rnbd-clt.h
+++ b/drivers/block/rnbd/rnbd-clt.h
@@ -20,10 +20,6 @@
 #include "rnbd-proto.h"
 #include "rnbd-log.h"
 
-/* Max. number of segments per IO request, Mellanox Connect X ~ Connect X5,
- * choose minimial 30 for all, minus 1 for internal protocol, so 29.
- */
-#define BMAX_SEGMENTS 29
 /* time in seconds between reconnect tries, default to 30 s */
 #define RECONNECT_DELAY 30
 /*
@@ -89,6 +85,7 @@ struct rnbd_clt_session {
 	atomic_t		busy;
 	size_t			queue_depth;
 	u32			max_io_size;
+	u32			max_segments;
 	struct blk_mq_tag_set	tag_set;
 	u32			nr_poll_queues;
 	struct mutex		lock; /* protects state and devs_list */
diff --git a/drivers/infiniband/ulp/rtrs/rtrs-clt.c b/drivers/infiniband/ulp/rtrs/rtrs-clt.c
index 042110739941..d7aed4388765 100644
--- a/drivers/infiniband/ulp/rtrs/rtrs-clt.c
+++ b/drivers/infiniband/ulp/rtrs/rtrs-clt.c
@@ -1357,7 +1357,6 @@ static void free_sess_reqs(struct rtrs_clt_sess *sess)
 static int alloc_sess_reqs(struct rtrs_clt_sess *sess)
 {
 	struct rtrs_clt_io_req *req;
-	struct rtrs_clt *clt = sess->clt;
 	int i, err = -ENOMEM;
 
 	sess->reqs = kcalloc(sess->queue_depth, sizeof(*sess->reqs),
@@ -1466,6 +1465,8 @@ static void query_fast_reg_mode(struct rtrs_clt_sess *sess)
 	sess->max_pages_per_mr = min3(sess->max_pages_per_mr,
 				      (u32)max_pages_per_mr,
 				      ib_dev->attrs.max_fast_reg_page_list_len);
+	sess->clt->max_segments =
+		min(sess->max_pages_per_mr, sess->clt->max_segments);
 }
 
 static bool rtrs_clt_change_state_get_old(struct rtrs_clt_sess *sess,
@@ -1503,9 +1504,8 @@ static void rtrs_clt_reconnect_work(struct work_struct *work);
 static void rtrs_clt_close_work(struct work_struct *work);
 
 static struct rtrs_clt_sess *alloc_sess(struct rtrs_clt *clt,
-					const struct rtrs_addr *path,
-					size_t con_num, u16 max_segments,
-					u32 nr_poll_queues)
+					const struct rtrs_addr *path,
+					size_t con_num, u32 nr_poll_queues)
 {
 	struct rtrs_clt_sess *sess;
 	int err = -ENOMEM;
@@ -2668,7 +2668,6 @@ static struct rtrs_clt *alloc_clt(const char *sessname, size_t paths_num,
 				  u16 port, size_t pdu_sz, void *priv,
 				  void	(*link_ev)(void *priv,
 						   enum rtrs_clt_link_ev ev),
-				  unsigned int max_segments,
 				  unsigned int reconnect_delay_sec,
 				  unsigned int max_reconnect_attempts)
 {
@@ -2766,7 +2765,6 @@ static void free_clt(struct rtrs_clt *clt)
  * @port: port to be used by the RTRS session
  * @pdu_sz: Size of extra payload which can be accessed after permit allocation.
  * @reconnect_delay_sec: time between reconnect tries
- * @max_segments: Max. number of segments per IO request
  * @max_reconnect_attempts: Number of times to reconnect on error before giving
  *			    up, 0 for disabled, -1 for forever
  * @nr_poll_queues: number of polling mode connection using IB_POLL_DIRECT flag
@@ -2781,7 +2779,6 @@ struct rtrs_clt *rtrs_clt_open(struct rtrs_clt_ops *ops,
 			       const struct rtrs_addr *paths,
 			       size_t paths_num, u16 port,
 			       size_t pdu_sz, u8 reconnect_delay_sec,
-			       u16 max_segments,
 			       s16 max_reconnect_attempts, u32 nr_poll_queues)
 {
 	struct rtrs_clt_sess *sess, *tmp;
@@ -2790,7 +2787,7 @@ struct rtrs_clt *rtrs_clt_open(struct rtrs_clt_ops *ops,
 
 	clt = alloc_clt(sessname, paths_num, port, pdu_sz, ops->priv,
 			ops->link_ev,
-			max_segments, reconnect_delay_sec,
+			reconnect_delay_sec,
 			max_reconnect_attempts);
 	if (IS_ERR(clt)) {
 		err = PTR_ERR(clt);
@@ -2800,7 +2797,7 @@ struct rtrs_clt *rtrs_clt_open(struct rtrs_clt_ops *ops,
 		struct rtrs_clt_sess *sess;
 
 		sess = alloc_sess(clt, &paths[i], nr_cpu_ids,
-				  max_segments, nr_poll_queues);
+				  nr_poll_queues);
 		if (IS_ERR(sess)) {
 			err = PTR_ERR(sess);
 			goto close_all_sess;
@@ -3062,6 +3059,7 @@ int rtrs_clt_query(struct rtrs_clt *clt, struct rtrs_attrs *attr)
 		return -ECOMM;
 
 	attr->queue_depth = clt->queue_depth;
+	attr->max_segments = clt->max_segments;
 	/* Cap max_io_size to min of remote buffer size and the fr pages */
 	attr->max_io_size = min_t(int, clt->max_io_size,
 				  clt->max_segments * SZ_4K);
@@ -3076,7 +3074,7 @@ int rtrs_clt_create_path_from_sysfs(struct rtrs_clt *clt,
 	struct rtrs_clt_sess *sess;
 	int err;
 
-	sess = alloc_sess(clt, addr, nr_cpu_ids, clt->max_segments, 0);
+	sess = alloc_sess(clt, addr, nr_cpu_ids, 0);
 	if (IS_ERR(sess))
 		return PTR_ERR(sess);
 
diff --git a/drivers/infiniband/ulp/rtrs/rtrs.h b/drivers/infiniband/ulp/rtrs/rtrs.h
index dc3e1af1a85b..859c79685daf 100644
--- a/drivers/infiniband/ulp/rtrs/rtrs.h
+++ b/drivers/infiniband/ulp/rtrs/rtrs.h
@@ -57,7 +57,6 @@ struct rtrs_clt *rtrs_clt_open(struct rtrs_clt_ops *ops,
 			       const struct rtrs_addr *paths,
 			       size_t path_cnt, u16 port,
 			       size_t pdu_sz, u8 reconnect_delay_sec,
-			       u16 max_segments,
 			       s16 max_reconnect_attempts, u32 nr_poll_queues);
 
 void rtrs_clt_close(struct rtrs_clt *sess);
@@ -110,6 +109,7 @@ int rtrs_clt_rdma_cq_direct(struct rtrs_clt *clt, unsigned int index);
 struct rtrs_attrs {
 	u32 queue_depth;
 	u32 max_io_size;
+	u32 max_segments;
 };
 
 int rtrs_clt_query(struct rtrs_clt *sess, struct rtrs_attrs *attr);