From patchwork Tue Jun 8 04:25:44 2021
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Bob Pearson
X-Patchwork-Id: 12305347
X-Patchwork-Delegate: jgg@ziepe.ca
From: Bob Pearson
To: jgg@nvidia.com, zyjzyj2000@gmail.com, monis@mellanox.com, linux-rdma@vger.kernel.org
Cc: Bob Pearson
Subject: [PATCH for-next v9 1/9] RDMA/rxe: Add bind MW fields to rxe_send_wr
Date: Mon, 7 Jun 2021 23:25:44 -0500
Message-Id: <20210608042552.33275-2-rpearsonhpe@gmail.com>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20210608042552.33275-1-rpearsonhpe@gmail.com>
References: <20210608042552.33275-1-rpearsonhpe@gmail.com>
X-Mailing-List: linux-rdma@vger.kernel.org

Add fields to struct rxe_send_wr in rdma_user_rxe.h to support bind MW
work requests.

Signed-off-by: Bob Pearson
---
v8: Dropped the flags field in wr.mw, which was no longer needed. The size
    of the mw struct broke binary compatibility because it extended the
    size of the wr union beyond 40 bytes.
Reported-by: Zhu Yanjun
---
 include/uapi/rdma/rdma_user_rxe.h | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/include/uapi/rdma/rdma_user_rxe.h b/include/uapi/rdma/rdma_user_rxe.h
index 068433e2229d..e283c2220aba 100644
--- a/include/uapi/rdma/rdma_user_rxe.h
+++ b/include/uapi/rdma/rdma_user_rxe.h
@@ -99,7 +99,16 @@ struct rxe_send_wr {
 			__u32 remote_qkey;
 			__u16 pkey_index;
 		} ud;
+		struct {
+			__aligned_u64 addr;
+			__aligned_u64 length;
+			__u32 mr_lkey;
+			__u32 mw_rkey;
+			__u32 rkey;
+			__u32 access;
+		} mw;
 		/* reg is only used by the kernel and is not part of the uapi */
+#ifdef __KERNEL__
 		struct {
 			union {
 				struct ib_mr *mr;
@@ -108,6 +117,7 @@
 			__u32 key;
 			__u32 access;
 		} reg;
+#endif
 	} wr;
 };

From patchwork Tue Jun 8 04:25:45 2021
X-Patchwork-Submitter: Bob Pearson
X-Patchwork-Id: 12305357
X-Patchwork-Delegate: jgg@ziepe.ca
From: Bob Pearson
To: jgg@nvidia.com, zyjzyj2000@gmail.com, monis@mellanox.com, linux-rdma@vger.kernel.org
Cc: Bob Pearson
Subject: [PATCH for-next v9 02/10] RDMA/rxe: Return errors for add index and key
Date: Mon, 7 Jun 2021 23:25:45 -0500
Message-Id: <20210608042552.33275-3-rpearsonhpe@gmail.com>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20210608042552.33275-1-rpearsonhpe@gmail.com>
References: <20210608042552.33275-1-rpearsonhpe@gmail.com>
X-Mailing-List: linux-rdma@vger.kernel.org

Modify rxe_add_index() and rxe_add_key() to return an error if the index
or key is already present in the pool. Currently they print a warning and
silently fail, with bad consequences for the caller.

Signed-off-by: Bob Pearson
---
 drivers/infiniband/sw/rxe/rxe_pool.c | 44 ++++++++++++++++++----------
 drivers/infiniband/sw/rxe/rxe_pool.h |  8 ++---
 2 files changed, 32 insertions(+), 20 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe_pool.c b/drivers/infiniband/sw/rxe/rxe_pool.c
index d24901f2af3f..df0bec719341 100644
--- a/drivers/infiniband/sw/rxe/rxe_pool.c
+++ b/drivers/infiniband/sw/rxe/rxe_pool.c
@@ -183,7 +183,7 @@ static u32 alloc_index(struct rxe_pool *pool)
 	return index + pool->index.min_index;
 }

-static void insert_index(struct rxe_pool *pool, struct rxe_pool_entry *new)
+static int rxe_insert_index(struct rxe_pool *pool, struct rxe_pool_entry *new)
 {
 	struct rb_node **link = &pool->index.tree.rb_node;
 	struct rb_node *parent = NULL;
@@ -195,7 +195,7 @@ static void insert_index(struct rxe_pool *pool, struct rxe_pool_entry *new)

 		if (elem->index == new->index) {
 			pr_warn("element already exists!\n");
-			goto out;
+			return -EINVAL;
 		}

 		if (elem->index > new->index)
@@ -206,11 +206,11 @@ static void insert_index(struct rxe_pool *pool, struct rxe_pool_entry *new)

 	rb_link_node(&new->index_node, parent, link);
 	rb_insert_color(&new->index_node, &pool->index.tree);
-out:
-	return;
+
+	return 0;
 }

-static void insert_key(struct rxe_pool *pool, struct rxe_pool_entry *new)
+static int rxe_insert_key(struct rxe_pool *pool, struct rxe_pool_entry *new)
 {
 	struct rb_node **link = &pool->key.tree.rb_node;
 	struct rb_node *parent = NULL;
@@ -226,7 +226,7 @@ static void insert_key(struct rxe_pool *pool, struct rxe_pool_entry *new)

 		if (cmp == 0) {
 			pr_warn("key already exists!\n");
-			goto out;
+			return -EINVAL;
 		}

 		if (cmp > 0)
@@ -237,26 +237,32 @@ static void insert_key(struct rxe_pool *pool, struct rxe_pool_entry *new)

 	rb_link_node(&new->key_node, parent, link);
 	rb_insert_color(&new->key_node, &pool->key.tree);
-out:
-	return;
+
+	return 0;
 }

-void __rxe_add_key_locked(struct rxe_pool_entry *elem, void *key)
+int __rxe_add_key_locked(struct rxe_pool_entry *elem, void *key)
 {
 	struct rxe_pool *pool = elem->pool;
+	int err;

 	memcpy((u8 *)elem + pool->key.key_offset, key, pool->key.key_size);
-	insert_key(pool, elem);
+	err = rxe_insert_key(pool, elem);
+
+	return err;
 }

-void __rxe_add_key(struct rxe_pool_entry *elem, void *key)
+int __rxe_add_key(struct rxe_pool_entry *elem, void *key)
 {
 	struct rxe_pool *pool = elem->pool;
 	unsigned long flags;
+	int err;

 	write_lock_irqsave(&pool->pool_lock, flags);
-	__rxe_add_key_locked(elem, key);
+	err = __rxe_add_key_locked(elem, key);
 	write_unlock_irqrestore(&pool->pool_lock, flags);
+
+	return err;
 }

 void __rxe_drop_key_locked(struct rxe_pool_entry *elem)
@@ -276,22 +282,28 @@ void __rxe_drop_key(struct rxe_pool_entry *elem)
 	write_unlock_irqrestore(&pool->pool_lock, flags);
 }

-void __rxe_add_index_locked(struct rxe_pool_entry *elem)
+int __rxe_add_index_locked(struct rxe_pool_entry *elem)
 {
 	struct rxe_pool *pool = elem->pool;
+	int err;

 	elem->index = alloc_index(pool);
-	insert_index(pool, elem);
+	err = rxe_insert_index(pool, elem);
+
+	return err;
 }

-void __rxe_add_index(struct rxe_pool_entry *elem)
+int __rxe_add_index(struct rxe_pool_entry *elem)
 {
 	struct rxe_pool *pool = elem->pool;
 	unsigned long flags;
+	int err;

 	write_lock_irqsave(&pool->pool_lock, flags);
-	__rxe_add_index_locked(elem);
+	err = __rxe_add_index_locked(elem);
 	write_unlock_irqrestore(&pool->pool_lock, flags);
+
+	return err;
 }

 void __rxe_drop_index_locked(struct rxe_pool_entry *elem)
diff --git a/drivers/infiniband/sw/rxe/rxe_pool.h b/drivers/infiniband/sw/rxe/rxe_pool.h
index 61210b300a78..1feca1bffced 100644
--- a/drivers/infiniband/sw/rxe/rxe_pool.h
+++ b/drivers/infiniband/sw/rxe/rxe_pool.h
@@ -111,11 +111,11 @@ int __rxe_add_to_pool(struct rxe_pool *pool, struct rxe_pool_entry *elem);
 /* assign an index to an indexed object and insert object into
  * pool's rb tree holding and not holding the pool_lock
  */
-void __rxe_add_index_locked(struct rxe_pool_entry *elem);
+int __rxe_add_index_locked(struct rxe_pool_entry *elem);

 #define rxe_add_index_locked(obj) __rxe_add_index_locked(&(obj)->pelem)

-void __rxe_add_index(struct rxe_pool_entry *elem);
+int __rxe_add_index(struct rxe_pool_entry *elem);

 #define rxe_add_index(obj) __rxe_add_index(&(obj)->pelem)

@@ -133,11 +133,11 @@ void __rxe_drop_index(struct rxe_pool_entry *elem);
 /* assign a key to a keyed object and insert object into
  * pool's rb tree holding and not holding pool_lock
  */
-void __rxe_add_key_locked(struct rxe_pool_entry *elem, void *key);
+int __rxe_add_key_locked(struct rxe_pool_entry *elem, void *key);

 #define rxe_add_key_locked(obj, key) __rxe_add_key_locked(&(obj)->pelem, key)

-void __rxe_add_key(struct rxe_pool_entry *elem, void *key);
+int __rxe_add_key(struct rxe_pool_entry *elem, void *key);

 #define rxe_add_key(obj, key) __rxe_add_key(&(obj)->pelem, key)

From patchwork Tue Jun 8 04:25:46 2021
X-Patchwork-Submitter: Bob Pearson
X-Patchwork-Id: 12305361
X-Patchwork-Delegate: jgg@ziepe.ca
From: Bob Pearson
To: jgg@nvidia.com, zyjzyj2000@gmail.com, monis@mellanox.com, linux-rdma@vger.kernel.org
Cc: Bob Pearson
Subject: [PATCH for-next v9 03/10] RDMA/rxe: Enable MW object pool
Date: Mon, 7 Jun 2021 23:25:46 -0500
Message-Id: <20210608042552.33275-4-rpearsonhpe@gmail.com>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20210608042552.33275-1-rpearsonhpe@gmail.com>
References: <20210608042552.33275-1-rpearsonhpe@gmail.com>
X-Mailing-List: linux-rdma@vger.kernel.org

Currently the rxe driver has a rxe_mw struct object but nothing about
memory windows is enabled. This patch enables memory windows and does
some minor cleanup:

Set the device attribute in rxe.c so that max_mw = MAX_MW.
Change parameters in rxe_param.h so that MAX_MW is the same as MAX_MR.
Reduce the number of MRs and MWs to 4K from 256K.
Add device capability bits for type 2A and 2B memory windows.
Remove RXE_MR_TYPE_MW from the rxe_mr_type enum.

Signed-off-by: Bob Pearson
---
 drivers/infiniband/sw/rxe/rxe.c       |  1 +
 drivers/infiniband/sw/rxe/rxe_param.h | 19 ++++++++++++-------
 drivers/infiniband/sw/rxe/rxe_verbs.h |  1 -
 3 files changed, 13 insertions(+), 8 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe.c b/drivers/infiniband/sw/rxe/rxe.c
index 95f0de0c8b49..8e0f9c489cab 100644
--- a/drivers/infiniband/sw/rxe/rxe.c
+++ b/drivers/infiniband/sw/rxe/rxe.c
@@ -54,6 +54,7 @@ static void rxe_init_device_param(struct rxe_dev *rxe)
 	rxe->attr.max_cq = RXE_MAX_CQ;
 	rxe->attr.max_cqe = (1 << RXE_MAX_LOG_CQE) - 1;
 	rxe->attr.max_mr = RXE_MAX_MR;
+	rxe->attr.max_mw = RXE_MAX_MW;
 	rxe->attr.max_pd = RXE_MAX_PD;
 	rxe->attr.max_qp_rd_atom = RXE_MAX_QP_RD_ATOM;
 	rxe->attr.max_res_rd_atom = RXE_MAX_RES_RD_ATOM;
diff --git a/drivers/infiniband/sw/rxe/rxe_param.h b/drivers/infiniband/sw/rxe/rxe_param.h
index 25ab50d9b7c2..742e6ec93686 100644
--- a/drivers/infiniband/sw/rxe/rxe_param.h
+++ b/drivers/infiniband/sw/rxe/rxe_param.h
@@ -37,7 +37,6 @@ static inline enum ib_mtu eth_mtu_int_to_enum(int mtu)
 enum rxe_device_param {
 	RXE_MAX_MR_SIZE = -1ull,
 	RXE_PAGE_SIZE_CAP = 0xfffff000,
-	RXE_MAX_QP = 0x10000,
 	RXE_MAX_QP_WR = 0x4000,
 	RXE_DEVICE_CAP_FLAGS = IB_DEVICE_BAD_PKEY_CNTR
 			| IB_DEVICE_BAD_QKEY_CNTR
@@ -49,7 +48,10 @@ enum rxe_device_param {
 			| IB_DEVICE_RC_RNR_NAK_GEN
 			| IB_DEVICE_SRQ_RESIZE
 			| IB_DEVICE_MEM_MGT_EXTENSIONS
-			| IB_DEVICE_ALLOW_USER_UNREG,
+			| IB_DEVICE_ALLOW_USER_UNREG
+			| IB_DEVICE_MEM_WINDOW
+			| IB_DEVICE_MEM_WINDOW_TYPE_2A
+			| IB_DEVICE_MEM_WINDOW_TYPE_2B,
 	RXE_MAX_SGE = 32,
 	RXE_MAX_WQE_SIZE = sizeof(struct rxe_send_wqe) +
 			sizeof(struct ib_sge) * RXE_MAX_SGE,
@@ -58,7 +60,6 @@ enum rxe_device_param {
 	RXE_MAX_SGE_RD = 32,
 	RXE_MAX_CQ = 16384,
 	RXE_MAX_LOG_CQE = 15,
-	RXE_MAX_MR = 256 * 1024,
 	RXE_MAX_PD = 0x7ffc,
 	RXE_MAX_QP_RD_ATOM = 128,
 	RXE_MAX_RES_RD_ATOM = 0x3f000,
@@ -67,7 +68,6 @@ enum rxe_device_param {
 	RXE_MAX_MCAST_QP_ATTACH = 56,
 	RXE_MAX_TOT_MCAST_QP_ATTACH = 0x70000,
 	RXE_MAX_AH = 100,
-	RXE_MAX_SRQ = 960,
 	RXE_MAX_SRQ_WR = 0x4000,
 	RXE_MIN_SRQ_WR = 1,
 	RXE_MAX_SRQ_SGE = 27,
@@ -80,16 +80,21 @@ enum rxe_device_param {
 	RXE_NUM_PORT = 1,

+	RXE_MAX_QP = 0x10000,
 	RXE_MIN_QP_INDEX = 16,
 	RXE_MAX_QP_INDEX = 0x00020000,

+	RXE_MAX_SRQ = 0x00001000,
 	RXE_MIN_SRQ_INDEX = 0x00020001,
 	RXE_MAX_SRQ_INDEX = 0x00040000,

+	RXE_MAX_MR = 0x00001000,
+	RXE_MAX_MW = 0x00001000,
 	RXE_MIN_MR_INDEX = 0x00000001,
-	RXE_MAX_MR_INDEX = 0x00040000,
-	RXE_MIN_MW_INDEX = 0x00040001,
-	RXE_MAX_MW_INDEX = 0x00060000,
+	RXE_MAX_MR_INDEX = 0x00010000,
+	RXE_MIN_MW_INDEX = 0x00010001,
+	RXE_MAX_MW_INDEX = 0x00020000,
+
 	RXE_MAX_PKT_PER_ACK = 64,

 	RXE_MAX_UNACKED_PSNS = 128,
diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.h b/drivers/infiniband/sw/rxe/rxe_verbs.h
index 11eba7a3ba8f..8d32e3f50813 100644
--- a/drivers/infiniband/sw/rxe/rxe_verbs.h
+++ b/drivers/infiniband/sw/rxe/rxe_verbs.h
@@ -273,7 +273,6 @@ enum rxe_mr_type {
 	RXE_MR_TYPE_NONE,
 	RXE_MR_TYPE_DMA,
 	RXE_MR_TYPE_MR,
-	RXE_MR_TYPE_MW,
 };

 #define RXE_BUF_PER_MAP (PAGE_SIZE / sizeof(struct rxe_phys_buf))

From patchwork Tue Jun 8 04:25:47 2021
X-Patchwork-Submitter: Bob Pearson
X-Patchwork-Id: 12305359
X-Patchwork-Delegate: jgg@ziepe.ca
From: Bob Pearson
To: jgg@nvidia.com, zyjzyj2000@gmail.com, monis@mellanox.com, linux-rdma@vger.kernel.org
Cc: Bob Pearson, kernel test robot
Subject: [PATCH for-next v9 04/10] RDMA/rxe: Add ib_alloc_mw and ib_dealloc_mw verbs
Date: Mon, 7 Jun 2021 23:25:47 -0500
Message-Id: <20210608042552.33275-5-rpearsonhpe@gmail.com>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20210608042552.33275-1-rpearsonhpe@gmail.com>
References: <20210608042552.33275-1-rpearsonhpe@gmail.com>
X-Mailing-List: linux-rdma@vger.kernel.org

Add the ib_alloc_mw and ib_dealloc_mw verbs APIs:

Add a new file rxe_mw.c focused on MWs.
Change the 8-bit random key generator.
Add a cleanup routine for MWs.
Add the verbs routines to ib_device_ops.

Signed-off-by: Bob Pearson
---
v7: Removed the duplicate INIT_RDMA_OBJ_SIZE(ib_mw, ...) in rxe_verbs.c.
    This was already added in patch 03/10.
Reported-by: Jason Gunthorp Reported-by: kernel test robot --- drivers/infiniband/sw/rxe/Makefile | 1 + drivers/infiniband/sw/rxe/rxe_loc.h | 6 +++ drivers/infiniband/sw/rxe/rxe_mr.c | 20 +++++----- drivers/infiniband/sw/rxe/rxe_mw.c | 53 +++++++++++++++++++++++++++ drivers/infiniband/sw/rxe/rxe_pool.c | 1 + drivers/infiniband/sw/rxe/rxe_verbs.c | 2 + drivers/infiniband/sw/rxe/rxe_verbs.h | 2 + 7 files changed, 74 insertions(+), 11 deletions(-) create mode 100644 drivers/infiniband/sw/rxe/rxe_mw.c diff --git a/drivers/infiniband/sw/rxe/Makefile b/drivers/infiniband/sw/rxe/Makefile index 66af72dca759..1e24673e9318 100644 --- a/drivers/infiniband/sw/rxe/Makefile +++ b/drivers/infiniband/sw/rxe/Makefile @@ -15,6 +15,7 @@ rdma_rxe-y := \ rxe_qp.o \ rxe_cq.o \ rxe_mr.o \ + rxe_mw.o \ rxe_opcode.o \ rxe_mmap.o \ rxe_icrc.o \ diff --git a/drivers/infiniband/sw/rxe/rxe_loc.h b/drivers/infiniband/sw/rxe/rxe_loc.h index b21038cb370f..422b9481d5f6 100644 --- a/drivers/infiniband/sw/rxe/rxe_loc.h +++ b/drivers/infiniband/sw/rxe/rxe_loc.h @@ -76,6 +76,7 @@ enum copy_direction { from_mr_obj, }; +u8 rxe_get_next_key(u32 last_key); void rxe_mr_init_dma(struct rxe_pd *pd, int access, struct rxe_mr *mr); int rxe_mr_init_user(struct rxe_pd *pd, u64 start, u64 length, u64 iova, @@ -106,6 +107,11 @@ void rxe_mr_cleanup(struct rxe_pool_entry *arg); int advance_dma_data(struct rxe_dma_info *dma, unsigned int length); +/* rxe_mw.c */ +int rxe_alloc_mw(struct ib_mw *ibmw, struct ib_udata *udata); +int rxe_dealloc_mw(struct ib_mw *ibmw); +void rxe_mw_cleanup(struct rxe_pool_entry *arg); + /* rxe_net.c */ void rxe_loopback(struct sk_buff *skb); int rxe_send(struct rxe_pkt_info *pkt, struct sk_buff *skb); diff --git a/drivers/infiniband/sw/rxe/rxe_mr.c b/drivers/infiniband/sw/rxe/rxe_mr.c index 373b46aab043..cfd35a442c10 100644 --- a/drivers/infiniband/sw/rxe/rxe_mr.c +++ b/drivers/infiniband/sw/rxe/rxe_mr.c @@ -7,19 +7,17 @@ #include "rxe.h" #include "rxe_loc.h" -/* - * lfsr (linear feedback 
shift register) with period 255 +/* Return a random 8 bit key value that is + * different than the last_key. Set last_key to -1 + * if this is the first key for an MR or MW */ -static u8 rxe_get_key(void) +u8 rxe_get_next_key(u32 last_key) { - static u32 key = 1; - - key = key << 1; - - key |= (0 != (key & 0x100)) ^ (0 != (key & 0x10)) - ^ (0 != (key & 0x80)) ^ (0 != (key & 0x40)); + u8 key; - key &= 0xff; + do { + get_random_bytes(&key, 1); + } while (key == last_key); return key; } @@ -47,7 +45,7 @@ int mr_check_range(struct rxe_mr *mr, u64 iova, size_t length) static void rxe_mr_init(int access, struct rxe_mr *mr) { - u32 lkey = mr->pelem.index << 8 | rxe_get_key(); + u32 lkey = mr->pelem.index << 8 | rxe_get_next_key(-1); u32 rkey = (access & IB_ACCESS_REMOTE) ? lkey : 0; mr->ibmr.lkey = lkey; diff --git a/drivers/infiniband/sw/rxe/rxe_mw.c b/drivers/infiniband/sw/rxe/rxe_mw.c new file mode 100644 index 000000000000..69128e298d44 --- /dev/null +++ b/drivers/infiniband/sw/rxe/rxe_mw.c @@ -0,0 +1,53 @@ +// SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB +/* + * Copyright (c) 2020 Hewlett Packard Enterprise, Inc. All rights reserved. + */ + +#include "rxe.h" + +int rxe_alloc_mw(struct ib_mw *ibmw, struct ib_udata *udata) +{ + struct rxe_mw *mw = to_rmw(ibmw); + struct rxe_pd *pd = to_rpd(ibmw->pd); + struct rxe_dev *rxe = to_rdev(ibmw->device); + int ret; + + rxe_add_ref(pd); + + ret = rxe_add_to_pool(&rxe->mw_pool, mw); + if (ret) { + rxe_drop_ref(pd); + return ret; + } + + rxe_add_index(mw); + ibmw->rkey = (mw->pelem.index << 8) | rxe_get_next_key(-1); + mw->state = (mw->ibmw.type == IB_MW_TYPE_2) ? 
+ RXE_MW_STATE_FREE : RXE_MW_STATE_VALID; + spin_lock_init(&mw->lock); + + return 0; +} + +int rxe_dealloc_mw(struct ib_mw *ibmw) +{ + struct rxe_mw *mw = to_rmw(ibmw); + struct rxe_pd *pd = to_rpd(ibmw->pd); + unsigned long flags; + + spin_lock_irqsave(&mw->lock, flags); + mw->state = RXE_MW_STATE_INVALID; + spin_unlock_irqrestore(&mw->lock, flags); + + rxe_drop_ref(mw); + rxe_drop_ref(pd); + + return 0; +} + +void rxe_mw_cleanup(struct rxe_pool_entry *elem) +{ + struct rxe_mw *mw = container_of(elem, typeof(*mw), pelem); + + rxe_drop_index(mw); +} diff --git a/drivers/infiniband/sw/rxe/rxe_pool.c b/drivers/infiniband/sw/rxe/rxe_pool.c index df0bec719341..0b8e7c6255a2 100644 --- a/drivers/infiniband/sw/rxe/rxe_pool.c +++ b/drivers/infiniband/sw/rxe/rxe_pool.c @@ -65,6 +65,7 @@ struct rxe_type_info rxe_type_info[RXE_NUM_TYPES] = { .name = "rxe-mw", .size = sizeof(struct rxe_mw), .elem_offset = offsetof(struct rxe_mw, pelem), + .cleanup = rxe_mw_cleanup, .flags = RXE_POOL_INDEX | RXE_POOL_NO_ALLOC, .max_index = RXE_MAX_MW_INDEX, .min_index = RXE_MIN_MW_INDEX, diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.c b/drivers/infiniband/sw/rxe/rxe_verbs.c index 86a0965a88f6..552a1ea9c8b7 100644 --- a/drivers/infiniband/sw/rxe/rxe_verbs.c +++ b/drivers/infiniband/sw/rxe/rxe_verbs.c @@ -1060,6 +1060,7 @@ static const struct ib_device_ops rxe_dev_ops = { .alloc_hw_stats = rxe_ib_alloc_hw_stats, .alloc_mr = rxe_alloc_mr, + .alloc_mw = rxe_alloc_mw, .alloc_pd = rxe_alloc_pd, .alloc_ucontext = rxe_alloc_ucontext, .attach_mcast = rxe_attach_mcast, @@ -1069,6 +1070,7 @@ static const struct ib_device_ops rxe_dev_ops = { .create_srq = rxe_create_srq, .create_user_ah = rxe_create_ah, .dealloc_driver = rxe_dealloc, + .dealloc_mw = rxe_dealloc_mw, .dealloc_pd = rxe_dealloc_pd, .dealloc_ucontext = rxe_dealloc_ucontext, .dereg_mr = rxe_dereg_mr, diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.h b/drivers/infiniband/sw/rxe/rxe_verbs.h index 8d32e3f50813..c8597ae8c833 100644 --- 
a/drivers/infiniband/sw/rxe/rxe_verbs.h +++ b/drivers/infiniband/sw/rxe/rxe_verbs.h @@ -323,6 +323,8 @@ enum rxe_mw_state { struct rxe_mw { struct ib_mw ibmw; struct rxe_pool_entry pelem; + spinlock_t lock; + enum rxe_mw_state state; }; struct rxe_mc_grp { From patchwork Tue Jun 8 04:25:48 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Bob Pearson X-Patchwork-Id: 12305343 X-Patchwork-Delegate: jgg@ziepe.ca Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-15.8 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,FREEMAIL_FORGED_FROMDOMAIN,FREEMAIL_FROM, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 88826C47082 for ; Tue, 8 Jun 2021 04:26:18 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 5E11461249 for ; Tue, 8 Jun 2021 04:26:18 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229507AbhFHE2J (ORCPT ); Tue, 8 Jun 2021 00:28:09 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:39290 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229451AbhFHE2I (ORCPT ); Tue, 8 Jun 2021 00:28:08 -0400 Received: from mail-oi1-x231.google.com (mail-oi1-x231.google.com [IPv6:2607:f8b0:4864:20::231]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 2C281C061789 for ; Mon, 7 Jun 2021 21:26:16 -0700 (PDT) Received: by mail-oi1-x231.google.com with SMTP id u11so20357404oiv.1 for ; Mon, 07 Jun 2021 21:26:16 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; 
From: Bob Pearson
To: jgg@nvidia.com, zyjzyj2000@gmail.com, monis@mellanox.com, linux-rdma@vger.kernel.org
Cc: Bob Pearson
Subject: [PATCH for-next v9 05/10] RDMA/rxe: Replace WR_REG_MASK by WR_LOCAL_OP_MASK
Date: Mon, 7 Jun 2021 23:25:48 -0500
Message-Id: <20210608042552.33275-6-rpearsonhpe@gmail.com>
In-Reply-To: <20210608042552.33275-1-rpearsonhpe@gmail.com>
References: <20210608042552.33275-1-rpearsonhpe@gmail.com>

rxe has two mask bits, WR_LOCAL_MASK and WR_REG_MASK; WR_REG_MASK is used to indicate any local operation while WR_LOCAL_MASK is unused. Replace both of them with a single mask bit, WR_LOCAL_OP_MASK, which is clearer.

Signed-off-by: Bob Pearson
---
 drivers/infiniband/sw/rxe/rxe_opcode.c | 4 ++--
 drivers/infiniband/sw/rxe/rxe_opcode.h | 3 +--
 drivers/infiniband/sw/rxe/rxe_req.c    | 2 +-
 drivers/infiniband/sw/rxe/rxe_verbs.c  | 2 +-
 4 files changed, 5 insertions(+), 6 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe_opcode.c b/drivers/infiniband/sw/rxe/rxe_opcode.c
index 0cb4b01fd910..1e4b67b048f3 100644
--- a/drivers/infiniband/sw/rxe/rxe_opcode.c
+++ b/drivers/infiniband/sw/rxe/rxe_opcode.c
@@ -87,13 +87,13 @@ struct rxe_wr_opcode_info rxe_wr_opcode_info[] = {
 	[IB_WR_LOCAL_INV]	= {
 		.name	= "IB_WR_LOCAL_INV",
 		.mask	= {
-			[IB_QPT_RC]	= WR_REG_MASK,
+			[IB_QPT_RC]	= WR_LOCAL_OP_MASK,
 		},
 	},
 	[IB_WR_REG_MR]		= {
 		.name	= "IB_WR_REG_MR",
 		.mask	= {
-			[IB_QPT_RC]	= WR_REG_MASK,
+			[IB_QPT_RC]	= WR_LOCAL_OP_MASK,
 		},
 	},
 };
diff --git a/drivers/infiniband/sw/rxe/rxe_opcode.h b/drivers/infiniband/sw/rxe/rxe_opcode.h
index 1041ac9a9233..e02f039b8c44 100644
--- a/drivers/infiniband/sw/rxe/rxe_opcode.h
+++ 
b/drivers/infiniband/sw/rxe/rxe_opcode.h
@@ -19,8 +19,7 @@ enum rxe_wr_mask {
 	WR_SEND_MASK			= BIT(2),
 	WR_READ_MASK			= BIT(3),
 	WR_WRITE_MASK			= BIT(4),
-	WR_LOCAL_MASK			= BIT(5),
-	WR_REG_MASK			= BIT(6),
+	WR_LOCAL_OP_MASK		= BIT(5),
 
 	WR_READ_OR_WRITE_MASK		= WR_READ_MASK | WR_WRITE_MASK,
 	WR_READ_WRITE_OR_SEND_MASK	= WR_READ_OR_WRITE_MASK | WR_SEND_MASK,
diff --git a/drivers/infiniband/sw/rxe/rxe_req.c b/drivers/infiniband/sw/rxe/rxe_req.c
index 3664cdae7e1f..0d4dcd514c55 100644
--- a/drivers/infiniband/sw/rxe/rxe_req.c
+++ b/drivers/infiniband/sw/rxe/rxe_req.c
@@ -593,7 +593,7 @@ int rxe_requester(void *arg)
 	if (unlikely(!wqe))
 		goto exit;
 
-	if (wqe->mask & WR_REG_MASK) {
+	if (wqe->mask & WR_LOCAL_OP_MASK) {
 		if (wqe->wr.opcode == IB_WR_LOCAL_INV) {
 			struct rxe_dev *rxe = to_rdev(qp->ibqp.device);
 			struct rxe_mr *rmr;
diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.c b/drivers/infiniband/sw/rxe/rxe_verbs.c
index 552a1ea9c8b7..4860e8ab378e 100644
--- a/drivers/infiniband/sw/rxe/rxe_verbs.c
+++ b/drivers/infiniband/sw/rxe/rxe_verbs.c
@@ -577,7 +577,7 @@ static void init_send_wqe(struct rxe_qp *qp, const struct ib_send_wr *ibwr,
 	init_send_wr(qp, &wqe->wr, ibwr);
 
 	/* local operation */
-	if (unlikely(mask & WR_REG_MASK)) {
+	if (unlikely(mask & WR_LOCAL_OP_MASK)) {
 		wqe->mask = mask;
 		wqe->state = wqe_state_posted;
 		return;

From patchwork Tue Jun 8 04:25:49 2021
X-Patchwork-Submitter: Bob Pearson
X-Patchwork-Id: 12305351
X-Patchwork-Delegate: jgg@ziepe.ca
From: Bob Pearson
To: jgg@nvidia.com, zyjzyj2000@gmail.com, monis@mellanox.com, linux-rdma@vger.kernel.org
Cc: Bob Pearson
Subject: [PATCH for-next v9 06/10] RDMA/rxe: Move local ops to subroutine
Date: Mon, 7 Jun 2021 23:25:49 -0500
Message-Id: <20210608042552.33275-7-rpearsonhpe@gmail.com>
In-Reply-To: <20210608042552.33275-1-rpearsonhpe@gmail.com>
References: <20210608042552.33275-1-rpearsonhpe@gmail.com>

Simplify rxe_requester() by moving the local operations to a subroutine, and add an error return for an illegal send WR opcode. Move next_index ahead of rxe_run_task, which fixes a small bug where work completions were delayed until after the next wqe, which was not the intended behavior. Let errors return their own WC status; previously all errors were reported as protection errors, which was incorrect. Change the error return from rxe_do_local_ops() to the err: label, which causes an immediate completion; without this, an error on a last WR may get lost. Rename fill_packet() to finish_packet(), which is more accurate.
Fixes: 8700e2e7c485 ("The software RoCE driver") Signed-off-by: Bob Pearson --- v9: Fixed long standing incorrect WC status values and error returns. --- drivers/infiniband/sw/rxe/rxe_req.c | 103 +++++++++++++++++----------- 1 file changed, 63 insertions(+), 40 deletions(-) diff --git a/drivers/infiniband/sw/rxe/rxe_req.c b/drivers/infiniband/sw/rxe/rxe_req.c index 889c7fea6f18..80872ec54219 100644 --- a/drivers/infiniband/sw/rxe/rxe_req.c +++ b/drivers/infiniband/sw/rxe/rxe_req.c @@ -462,7 +462,7 @@ static struct sk_buff *init_req_packet(struct rxe_qp *qp, return skb; } -static int fill_packet(struct rxe_qp *qp, struct rxe_send_wqe *wqe, +static int finish_packet(struct rxe_qp *qp, struct rxe_send_wqe *wqe, struct rxe_pkt_info *pkt, struct sk_buff *skb, int paylen) { @@ -578,6 +578,54 @@ static void update_state(struct rxe_qp *qp, struct rxe_send_wqe *wqe, jiffies + qp->qp_timeout_jiffies); } +static int rxe_do_local_ops(struct rxe_qp *qp, struct rxe_send_wqe *wqe) +{ + u8 opcode = wqe->wr.opcode; + struct rxe_dev *rxe; + struct rxe_mr *mr; + u32 rkey; + + switch (opcode) { + case IB_WR_LOCAL_INV: + rxe = to_rdev(qp->ibqp.device); + rkey = wqe->wr.ex.invalidate_rkey; + mr = rxe_pool_get_index(&rxe->mr_pool, rkey >> 8); + if (!mr) { + pr_err("No MR for rkey %#x\n", rkey); + wqe->status = IB_WC_LOC_QP_OP_ERR; + return -EINVAL; + } + mr->state = RXE_MR_STATE_FREE; + rxe_drop_ref(mr); + break; + case IB_WR_REG_MR: + mr = to_rmr(wqe->wr.wr.reg.mr); + + rxe_add_ref(mr); + mr->state = RXE_MR_STATE_VALID; + mr->access = wqe->wr.wr.reg.access; + mr->ibmr.lkey = wqe->wr.wr.reg.key; + mr->ibmr.rkey = wqe->wr.wr.reg.key; + mr->iova = wqe->wr.wr.reg.mr->iova; + rxe_drop_ref(mr); + break; + default: + pr_err("Unexpected send wqe opcode %d\n", opcode); + wqe->status = IB_WC_LOC_QP_OP_ERR; + return -EINVAL; + } + + wqe->state = wqe_state_done; + wqe->status = IB_WC_SUCCESS; + qp->req.wqe_index = next_index(qp->sq.queue, qp->req.wqe_index); + + if ((wqe->wr.send_flags & 
IB_SEND_SIGNALED) || + qp->sq_sig_type == IB_SIGNAL_ALL_WR) + rxe_run_task(&qp->comp.task, 1); + + return 0; +} + int rxe_requester(void *arg) { struct rxe_qp *qp = (struct rxe_qp *)arg; @@ -618,42 +666,11 @@ int rxe_requester(void *arg) goto exit; if (wqe->mask & WR_LOCAL_OP_MASK) { - if (wqe->wr.opcode == IB_WR_LOCAL_INV) { - struct rxe_dev *rxe = to_rdev(qp->ibqp.device); - struct rxe_mr *rmr; - - rmr = rxe_pool_get_index(&rxe->mr_pool, - wqe->wr.ex.invalidate_rkey >> 8); - if (!rmr) { - pr_err("No mr for key %#x\n", - wqe->wr.ex.invalidate_rkey); - wqe->state = wqe_state_error; - wqe->status = IB_WC_MW_BIND_ERR; - goto exit; - } - rmr->state = RXE_MR_STATE_FREE; - rxe_drop_ref(rmr); - wqe->state = wqe_state_done; - wqe->status = IB_WC_SUCCESS; - } else if (wqe->wr.opcode == IB_WR_REG_MR) { - struct rxe_mr *rmr = to_rmr(wqe->wr.wr.reg.mr); - - rmr->state = RXE_MR_STATE_VALID; - rmr->access = wqe->wr.wr.reg.access; - rmr->ibmr.lkey = wqe->wr.wr.reg.key; - rmr->ibmr.rkey = wqe->wr.wr.reg.key; - rmr->iova = wqe->wr.wr.reg.mr->iova; - wqe->state = wqe_state_done; - wqe->status = IB_WC_SUCCESS; - } else { - goto exit; - } - if ((wqe->wr.send_flags & IB_SEND_SIGNALED) || - qp->sq_sig_type == IB_SIGNAL_ALL_WR) - rxe_run_task(&qp->comp.task, 1); - qp->req.wqe_index = next_index(qp->sq.queue, - qp->req.wqe_index); - goto next_wqe; + ret = rxe_do_local_ops(qp, wqe); + if (unlikely(ret)) + goto err; + else + goto next_wqe; } if (unlikely(qp_type(qp) == IB_QPT_RC && @@ -711,11 +728,17 @@ int rxe_requester(void *arg) skb = init_req_packet(qp, wqe, opcode, payload, &pkt); if (unlikely(!skb)) { pr_err("qp#%d Failed allocating skb\n", qp_num(qp)); + wqe->status = IB_WC_LOC_QP_OP_ERR; goto err; } - if (fill_packet(qp, wqe, &pkt, skb, payload)) { - pr_debug("qp#%d Error during fill packet\n", qp_num(qp)); + ret = finish_packet(qp, wqe, &pkt, skb, payload); + if (unlikely(ret)) { + pr_debug("qp#%d Error during finish packet\n", qp_num(qp)); + if (ret == -EFAULT) + wqe->status = 
IB_WC_LOC_PROT_ERR;
+		else
+			wqe->status = IB_WC_LOC_QP_OP_ERR;
 		kfree_skb(skb);
 		goto err;
 	}
@@ -740,6 +763,7 @@ int rxe_requester(void *arg)
 		goto exit;
 	}
 
+	wqe->status = IB_WC_LOC_QP_OP_ERR;
 	goto err;
 }
@@ -748,7 +772,6 @@
 	goto next_wqe;
 
 err:
-	wqe->status = IB_WC_LOC_PROT_ERR;
 	wqe->state = wqe_state_error;
 	__rxe_do_task(&qp->comp.task);

From patchwork Tue Jun 8 04:25:50 2021
X-Patchwork-Submitter: Bob Pearson
X-Patchwork-Id: 12305363
X-Patchwork-Delegate: jgg@ziepe.ca
From: Bob Pearson
To: jgg@nvidia.com, zyjzyj2000@gmail.com, monis@mellanox.com, linux-rdma@vger.kernel.org
Cc: Bob Pearson
Subject: [PATCH for-next v9 07/10] RDMA/rxe: Add support for bind MW work requests
Date: Mon, 7 Jun 2021 23:25:50 -0500
Message-Id: <20210608042552.33275-8-rpearsonhpe@gmail.com>
In-Reply-To: <20210608042552.33275-1-rpearsonhpe@gmail.com>
References: <20210608042552.33275-1-rpearsonhpe@gmail.com>

Add support for bind MW work requests from user space. Since rdma/core does not support bind MW in ib_send_wr, there is no way to support bind MW in kernel space.

Add the bind_mw local operation in rxe_req.c, the bind_mw WR operation in rxe_opcode.c, and the bind_mw WC in rxe_comp.c. Add additional fields to rxe_mw in rxe_verbs.h, and add the rxe_do_dealloc_mw() subroutine to clean up an MW when rxe_dealloc_mw() is called.
Added code to implement bind_mw operation in rxe_mw.c Signed-off-by: Bob Pearson --- drivers/infiniband/sw/rxe/rxe_comp.c | 1 + drivers/infiniband/sw/rxe/rxe_loc.h | 1 + drivers/infiniband/sw/rxe/rxe_mw.c | 202 ++++++++++++++++++++++++- drivers/infiniband/sw/rxe/rxe_opcode.c | 7 + drivers/infiniband/sw/rxe/rxe_req.c | 8 + drivers/infiniband/sw/rxe/rxe_verbs.h | 15 +- 6 files changed, 229 insertions(+), 5 deletions(-) diff --git a/drivers/infiniband/sw/rxe/rxe_comp.c b/drivers/infiniband/sw/rxe/rxe_comp.c index 32e587c47637..02bc93e186cc 100644 --- a/drivers/infiniband/sw/rxe/rxe_comp.c +++ b/drivers/infiniband/sw/rxe/rxe_comp.c @@ -103,6 +103,7 @@ static enum ib_wc_opcode wr_to_wc_opcode(enum ib_wr_opcode opcode) case IB_WR_RDMA_READ_WITH_INV: return IB_WC_RDMA_READ; case IB_WR_LOCAL_INV: return IB_WC_LOCAL_INV; case IB_WR_REG_MR: return IB_WC_REG_MR; + case IB_WR_BIND_MW: return IB_WC_BIND_MW; default: return 0xff; diff --git a/drivers/infiniband/sw/rxe/rxe_loc.h b/drivers/infiniband/sw/rxe/rxe_loc.h index 422b9481d5f6..e97048e77451 100644 --- a/drivers/infiniband/sw/rxe/rxe_loc.h +++ b/drivers/infiniband/sw/rxe/rxe_loc.h @@ -110,6 +110,7 @@ int advance_dma_data(struct rxe_dma_info *dma, unsigned int length); /* rxe_mw.c */ int rxe_alloc_mw(struct ib_mw *ibmw, struct ib_udata *udata); int rxe_dealloc_mw(struct ib_mw *ibmw); +int rxe_bind_mw(struct rxe_qp *qp, struct rxe_send_wqe *wqe); void rxe_mw_cleanup(struct rxe_pool_entry *arg); /* rxe_net.c */ diff --git a/drivers/infiniband/sw/rxe/rxe_mw.c b/drivers/infiniband/sw/rxe/rxe_mw.c index 69128e298d44..65215dde9974 100644 --- a/drivers/infiniband/sw/rxe/rxe_mw.c +++ b/drivers/infiniband/sw/rxe/rxe_mw.c @@ -29,6 +29,29 @@ int rxe_alloc_mw(struct ib_mw *ibmw, struct ib_udata *udata) return 0; } +static void rxe_do_dealloc_mw(struct rxe_mw *mw) +{ + if (mw->mr) { + struct rxe_mr *mr = mw->mr; + + mw->mr = NULL; + atomic_dec(&mr->num_mw); + rxe_drop_ref(mr); + } + + if (mw->qp) { + struct rxe_qp *qp = mw->qp; + + 
mw->qp = NULL; + rxe_drop_ref(qp); + } + + mw->access = 0; + mw->addr = 0; + mw->length = 0; + mw->state = RXE_MW_STATE_INVALID; +} + int rxe_dealloc_mw(struct ib_mw *ibmw) { struct rxe_mw *mw = to_rmw(ibmw); @@ -36,7 +59,7 @@ int rxe_dealloc_mw(struct ib_mw *ibmw) unsigned long flags; spin_lock_irqsave(&mw->lock, flags); - mw->state = RXE_MW_STATE_INVALID; + rxe_do_dealloc_mw(mw); spin_unlock_irqrestore(&mw->lock, flags); rxe_drop_ref(mw); @@ -45,6 +68,183 @@ int rxe_dealloc_mw(struct ib_mw *ibmw) return 0; } +static int rxe_check_bind_mw(struct rxe_qp *qp, struct rxe_send_wqe *wqe, + struct rxe_mw *mw, struct rxe_mr *mr) +{ + if (mw->ibmw.type == IB_MW_TYPE_1) { + if (unlikely(mw->state != RXE_MW_STATE_VALID)) { + pr_err_once( + "attempt to bind a type 1 MW not in the valid state\n"); + return -EINVAL; + } + + /* o10-36.2.2 */ + if (unlikely((mw->access & IB_ZERO_BASED))) { + pr_err_once("attempt to bind a zero based type 1 MW\n"); + return -EINVAL; + } + } + + if (mw->ibmw.type == IB_MW_TYPE_2) { + /* o10-37.2.30 */ + if (unlikely(mw->state != RXE_MW_STATE_FREE)) { + pr_err_once( + "attempt to bind a type 2 MW not in the free state\n"); + return -EINVAL; + } + + /* C10-72 */ + if (unlikely(qp->pd != to_rpd(mw->ibmw.pd))) { + pr_err_once( + "attempt to bind type 2 MW with qp with different PD\n"); + return -EINVAL; + } + + /* o10-37.2.40 */ + if (unlikely(!mr || wqe->wr.wr.mw.length == 0)) { + pr_err_once( + "attempt to invalidate type 2 MW by binding with NULL or zero length MR\n"); + return -EINVAL; + } + } + + if (unlikely((wqe->wr.wr.mw.rkey & 0xff) == (mw->ibmw.rkey & 0xff))) { + pr_err_once("attempt to bind MW with same key\n"); + return -EINVAL; + } + + /* remaining checks only apply to a nonzero MR */ + if (!mr) + return 0; + + if (unlikely(mr->access & IB_ZERO_BASED)) { + pr_err_once("attempt to bind MW to zero based MR\n"); + return -EINVAL; + } + + /* C10-73 */ + if (unlikely(!(mr->access & IB_ACCESS_MW_BIND))) { + pr_err_once( + "attempt to bind an MW 
to an MR without bind access\n"); + return -EINVAL; + } + + /* C10-74 */ + if (unlikely((mw->access & + (IB_ACCESS_REMOTE_WRITE | IB_ACCESS_REMOTE_ATOMIC)) && + !(mr->access & IB_ACCESS_LOCAL_WRITE))) { + pr_err_once( + "attempt to bind an writeable MW to an MR without local write access\n"); + return -EINVAL; + } + + /* C10-75 */ + if (mw->access & IB_ZERO_BASED) { + if (unlikely(wqe->wr.wr.mw.length > mr->length)) { + pr_err_once( + "attempt to bind a ZB MW outside of the MR\n"); + return -EINVAL; + } + } else { + if (unlikely((wqe->wr.wr.mw.addr < mr->iova) || + ((wqe->wr.wr.mw.addr + wqe->wr.wr.mw.length) > + (mr->iova + mr->length)))) { + pr_err_once( + "attempt to bind a VA MW outside of the MR\n"); + return -EINVAL; + } + } + + return 0; +} + +static void rxe_do_bind_mw(struct rxe_qp *qp, struct rxe_send_wqe *wqe, + struct rxe_mw *mw, struct rxe_mr *mr) +{ + u32 rkey; + u32 new_rkey; + + rkey = mw->ibmw.rkey; + new_rkey = (rkey & 0xffffff00) | (wqe->wr.wr.mw.rkey & 0x000000ff); + + mw->ibmw.rkey = new_rkey; + mw->access = wqe->wr.wr.mw.access; + mw->state = RXE_MW_STATE_VALID; + mw->addr = wqe->wr.wr.mw.addr; + mw->length = wqe->wr.wr.mw.length; + + if (mw->mr) { + rxe_drop_ref(mw->mr); + atomic_dec(&mw->mr->num_mw); + mw->mr = NULL; + } + + if (mw->length) { + mw->mr = mr; + atomic_inc(&mr->num_mw); + rxe_add_ref(mr); + } + + if (mw->ibmw.type == IB_MW_TYPE_2) { + rxe_add_ref(qp); + mw->qp = qp; + } +} + +int rxe_bind_mw(struct rxe_qp *qp, struct rxe_send_wqe *wqe) +{ + int ret; + struct rxe_mw *mw; + struct rxe_mr *mr; + struct rxe_dev *rxe = to_rdev(qp->ibqp.device); + unsigned long flags; + + mw = rxe_pool_get_index(&rxe->mw_pool, + wqe->wr.wr.mw.mw_rkey >> 8); + if (unlikely(!mw)) { + ret = -EINVAL; + goto err; + } + + if (unlikely(mw->ibmw.rkey != wqe->wr.wr.mw.mw_rkey)) { + ret = -EINVAL; + goto err_drop_mw; + } + + if (likely(wqe->wr.wr.mw.length)) { + mr = rxe_pool_get_index(&rxe->mr_pool, + wqe->wr.wr.mw.mr_lkey >> 8); + if (unlikely(!mr)) { + ret 
= -EINVAL; + goto err_drop_mw; + } + + if (unlikely(mr->ibmr.lkey != wqe->wr.wr.mw.mr_lkey)) { + ret = -EINVAL; + goto err_drop_mr; + } + } else { + mr = NULL; + } + + spin_lock_irqsave(&mw->lock, flags); + + ret = rxe_check_bind_mw(qp, wqe, mw, mr); + if (ret) + goto err_unlock; + + rxe_do_bind_mw(qp, wqe, mw, mr); +err_unlock: + spin_unlock_irqrestore(&mw->lock, flags); +err_drop_mr: + if (mr) + rxe_drop_ref(mr); +err_drop_mw: + rxe_drop_ref(mw); +err: + return ret; +} + void rxe_mw_cleanup(struct rxe_pool_entry *elem) { struct rxe_mw *mw = container_of(elem, typeof(*mw), pelem); diff --git a/drivers/infiniband/sw/rxe/rxe_opcode.c b/drivers/infiniband/sw/rxe/rxe_opcode.c index 1e4b67b048f3..3ef5a10a6efd 100644 --- a/drivers/infiniband/sw/rxe/rxe_opcode.c +++ b/drivers/infiniband/sw/rxe/rxe_opcode.c @@ -96,6 +96,13 @@ struct rxe_wr_opcode_info rxe_wr_opcode_info[] = { [IB_QPT_RC] = WR_LOCAL_OP_MASK, }, }, + [IB_WR_BIND_MW] = { + .name = "IB_WR_BIND_MW", + .mask = { + [IB_QPT_RC] = WR_LOCAL_OP_MASK, + [IB_QPT_UC] = WR_LOCAL_OP_MASK, + }, + }, }; struct rxe_opcode_info rxe_opcode[RXE_NUM_OPCODE] = { diff --git a/drivers/infiniband/sw/rxe/rxe_req.c b/drivers/infiniband/sw/rxe/rxe_req.c index 80872ec54219..6583f8ca95dc 100644 --- a/drivers/infiniband/sw/rxe/rxe_req.c +++ b/drivers/infiniband/sw/rxe/rxe_req.c @@ -584,6 +584,7 @@ static int rxe_do_local_ops(struct rxe_qp *qp, struct rxe_send_wqe *wqe) struct rxe_dev *rxe; struct rxe_mr *mr; u32 rkey; + int ret; switch (opcode) { case IB_WR_LOCAL_INV: @@ -609,6 +610,13 @@ static int rxe_do_local_ops(struct rxe_qp *qp, struct rxe_send_wqe *wqe) mr->iova = wqe->wr.wr.reg.mr->iova; rxe_drop_ref(mr); break; + case IB_WR_BIND_MW: + ret = rxe_bind_mw(qp, wqe); + if (unlikely(ret)) { + wqe->status = IB_WC_MW_BIND_ERR; + return ret; + } + break; default: pr_err("Unexpected send wqe opcode %d\n", opcode); wqe->status = IB_WC_LOC_QP_OP_ERR; diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.h b/drivers/infiniband/sw/rxe/rxe_verbs.h 
index 5effb12d22cc..3d0ab8b7804f 100644
--- a/drivers/infiniband/sw/rxe/rxe_verbs.h
+++ b/drivers/infiniband/sw/rxe/rxe_verbs.h
@@ -315,6 +315,8 @@ struct rxe_mr {
 	u32			num_map;
 
 	struct rxe_map		**map;
+
+	atomic_t		num_mw;
 };
 
 enum rxe_mw_state {
@@ -324,10 +326,15 @@ enum rxe_mw_state {
 };
 
 struct rxe_mw {
-	struct ib_mw ibmw;
-	struct rxe_pool_entry pelem;
-	spinlock_t lock;
-	enum rxe_mw_state state;
+	struct ib_mw		ibmw;
+	struct rxe_pool_entry	pelem;
+	spinlock_t		lock;
+	enum rxe_mw_state	state;
+	struct rxe_qp		*qp; /* Type 2 only */
+	struct rxe_mr		*mr;
+	int			access;
+	u64			addr;
+	u64			length;
 };
 
 struct rxe_mc_grp {

From patchwork Tue Jun 8 04:25:51 2021
X-Patchwork-Submitter: Bob Pearson
X-Patchwork-Id: 12305349
X-Patchwork-Delegate: jgg@ziepe.ca
From: Bob Pearson
To: jgg@nvidia.com, zyjzyj2000@gmail.com, monis@mellanox.com, linux-rdma@vger.kernel.org
Cc: Bob Pearson
Subject: [PATCH for-next v9 08/10] RDMA/rxe: Implement invalidate MW operations
Date: Mon, 7 Jun 2021 23:25:51 -0500
Message-Id: <20210608042552.33275-9-rpearsonhpe@gmail.com>
In-Reply-To: <20210608042552.33275-1-rpearsonhpe@gmail.com>
References: <20210608042552.33275-1-rpearsonhpe@gmail.com>

Implement invalidate MW and clean up the invalidate MR operations. Add code to perform remote invalidate for send with invalidate, and code to perform local invalidation. Delete some blank lines in rxe_loc.h.

Signed-off-by: Bob Pearson
---
v6: Added rxe_ to subroutine names in lines that changed.
v3: Replaced enums in lower case with upper case and moved them to rxe_verbs.h, which is where enums live.
--- drivers/infiniband/sw/rxe/rxe_comp.c | 4 +- drivers/infiniband/sw/rxe/rxe_loc.h | 29 ++-------- drivers/infiniband/sw/rxe/rxe_mr.c | 81 ++++++++++++++++++--------- drivers/infiniband/sw/rxe/rxe_mw.c | 67 ++++++++++++++++++++++ drivers/infiniband/sw/rxe/rxe_req.c | 18 +++--- drivers/infiniband/sw/rxe/rxe_resp.c | 60 ++++++++++++-------- drivers/infiniband/sw/rxe/rxe_verbs.h | 33 ++++++++--- 7 files changed, 199 insertions(+), 93 deletions(-) diff --git a/drivers/infiniband/sw/rxe/rxe_comp.c b/drivers/infiniband/sw/rxe/rxe_comp.c index 02bc93e186cc..d4ceb81a96df 100644 --- a/drivers/infiniband/sw/rxe/rxe_comp.c +++ b/drivers/infiniband/sw/rxe/rxe_comp.c @@ -349,7 +349,7 @@ static inline enum comp_state do_read(struct rxe_qp *qp, ret = copy_data(qp->pd, IB_ACCESS_LOCAL_WRITE, &wqe->dma, payload_addr(pkt), - payload_size(pkt), to_mr_obj, NULL); + payload_size(pkt), RXE_TO_MR_OBJ, NULL); if (ret) return COMPST_ERROR; @@ -369,7 +369,7 @@ static inline enum comp_state do_atomic(struct rxe_qp *qp, ret = copy_data(qp->pd, IB_ACCESS_LOCAL_WRITE, &wqe->dma, &atomic_orig, - sizeof(u64), to_mr_obj, NULL); + sizeof(u64), RXE_TO_MR_OBJ, NULL); if (ret) return COMPST_ERROR; else diff --git a/drivers/infiniband/sw/rxe/rxe_loc.h b/drivers/infiniband/sw/rxe/rxe_loc.h index e97048e77451..8a4633aeb2a8 100644 --- a/drivers/infiniband/sw/rxe/rxe_loc.h +++ b/drivers/infiniband/sw/rxe/rxe_loc.h @@ -71,46 +71,29 @@ struct rxe_mmap_info *rxe_create_mmap_info(struct rxe_dev *dev, u32 size, int rxe_mmap(struct ib_ucontext *context, struct vm_area_struct *vma); /* rxe_mr.c */ -enum copy_direction { - to_mr_obj, - from_mr_obj, -}; - u8 rxe_get_next_key(u32 last_key); void rxe_mr_init_dma(struct rxe_pd *pd, int access, struct rxe_mr *mr); - int rxe_mr_init_user(struct rxe_pd *pd, u64 start, u64 length, u64 iova, int access, struct rxe_mr *mr); - int rxe_mr_init_fast(struct rxe_pd *pd, int max_pages, struct rxe_mr *mr); - int rxe_mr_copy(struct rxe_mr *mr, u64 iova, void *addr, int length, - 
enum copy_direction dir, u32 *crcp); - + enum rxe_mr_copy_dir dir, u32 *crcp); int copy_data(struct rxe_pd *pd, int access, struct rxe_dma_info *dma, void *addr, int length, - enum copy_direction dir, u32 *crcp); - + enum rxe_mr_copy_dir dir, u32 *crcp); void *iova_to_vaddr(struct rxe_mr *mr, u64 iova, int length); - -enum lookup_type { - lookup_local, - lookup_remote, -}; - struct rxe_mr *lookup_mr(struct rxe_pd *pd, int access, u32 key, - enum lookup_type type); - + enum rxe_mr_lookup_type type); int mr_check_range(struct rxe_mr *mr, u64 iova, size_t length); - -void rxe_mr_cleanup(struct rxe_pool_entry *arg); - int advance_dma_data(struct rxe_dma_info *dma, unsigned int length); +int rxe_invalidate_mr(struct rxe_qp *qp, u32 rkey); +void rxe_mr_cleanup(struct rxe_pool_entry *arg); /* rxe_mw.c */ int rxe_alloc_mw(struct ib_mw *ibmw, struct ib_udata *udata); int rxe_dealloc_mw(struct ib_mw *ibmw); int rxe_bind_mw(struct rxe_qp *qp, struct rxe_send_wqe *wqe); +int rxe_invalidate_mw(struct rxe_qp *qp, u32 rkey); void rxe_mw_cleanup(struct rxe_pool_entry *arg); /* rxe_net.c */ diff --git a/drivers/infiniband/sw/rxe/rxe_mr.c b/drivers/infiniband/sw/rxe/rxe_mr.c index cfd35a442c10..3fb58d2c7814 100644 --- a/drivers/infiniband/sw/rxe/rxe_mr.c +++ b/drivers/infiniband/sw/rxe/rxe_mr.c @@ -55,21 +55,6 @@ static void rxe_mr_init(int access, struct rxe_mr *mr) mr->map_shift = ilog2(RXE_BUF_PER_MAP); } -void rxe_mr_cleanup(struct rxe_pool_entry *arg) -{ - struct rxe_mr *mr = container_of(arg, typeof(*mr), pelem); - int i; - - ib_umem_release(mr->umem); - - if (mr->map) { - for (i = 0; i < mr->num_map; i++) - kfree(mr->map[i]); - - kfree(mr->map); - } -} - static int rxe_mr_alloc(struct rxe_mr *mr, int num_buf) { int i; @@ -298,7 +283,7 @@ void *iova_to_vaddr(struct rxe_mr *mr, u64 iova, int length) * crc32 if crcp is not zero. 
caller must hold a reference to mr */ int rxe_mr_copy(struct rxe_mr *mr, u64 iova, void *addr, int length, - enum copy_direction dir, u32 *crcp) + enum rxe_mr_copy_dir dir, u32 *crcp) { int err; int bytes; @@ -316,9 +301,9 @@ int rxe_mr_copy(struct rxe_mr *mr, u64 iova, void *addr, int length, if (mr->type == RXE_MR_TYPE_DMA) { u8 *src, *dest; - src = (dir == to_mr_obj) ? addr : ((void *)(uintptr_t)iova); + src = (dir == RXE_TO_MR_OBJ) ? addr : ((void *)(uintptr_t)iova); - dest = (dir == to_mr_obj) ? ((void *)(uintptr_t)iova) : addr; + dest = (dir == RXE_TO_MR_OBJ) ? ((void *)(uintptr_t)iova) : addr; memcpy(dest, src, length); @@ -346,8 +331,8 @@ int rxe_mr_copy(struct rxe_mr *mr, u64 iova, void *addr, int length, u8 *src, *dest; va = (u8 *)(uintptr_t)buf->addr + offset; - src = (dir == to_mr_obj) ? addr : va; - dest = (dir == to_mr_obj) ? va : addr; + src = (dir == RXE_TO_MR_OBJ) ? addr : va; + dest = (dir == RXE_TO_MR_OBJ) ? va : addr; bytes = buf->size - offset; @@ -392,7 +377,7 @@ int copy_data( struct rxe_dma_info *dma, void *addr, int length, - enum copy_direction dir, + enum rxe_mr_copy_dir dir, u32 *crcp) { int bytes; @@ -412,7 +397,7 @@ int copy_data( } if (sge->length && (offset < sge->length)) { - mr = lookup_mr(pd, access, sge->lkey, lookup_local); + mr = lookup_mr(pd, access, sge->lkey, RXE_LOOKUP_LOCAL); if (!mr) { err = -EINVAL; goto err1; @@ -438,7 +423,7 @@ int copy_data( if (sge->length) { mr = lookup_mr(pd, access, sge->lkey, - lookup_local); + RXE_LOOKUP_LOCAL); if (!mr) { err = -EINVAL; goto err1; @@ -520,7 +505,7 @@ int advance_dma_data(struct rxe_dma_info *dma, unsigned int length) * (4) verify that mr state is valid */ struct rxe_mr *lookup_mr(struct rxe_pd *pd, int access, u32 key, - enum lookup_type type) + enum rxe_mr_lookup_type type) { struct rxe_mr *mr; struct rxe_dev *rxe = to_rdev(pd->ibpd.device); @@ -530,8 +515,8 @@ struct rxe_mr *lookup_mr(struct rxe_pd *pd, int access, u32 key, if (!mr) return NULL; - if (unlikely((type == 
lookup_local && mr_lkey(mr) != key) || - (type == lookup_remote && mr_rkey(mr) != key) || + if (unlikely((type == RXE_LOOKUP_LOCAL && mr_lkey(mr) != key) || + (type == RXE_LOOKUP_REMOTE && mr_rkey(mr) != key) || mr_pd(mr) != pd || (access && !(access & mr->access)) || mr->state != RXE_MR_STATE_VALID)) { rxe_drop_ref(mr); @@ -540,3 +525,47 @@ struct rxe_mr *lookup_mr(struct rxe_pd *pd, int access, u32 key, return mr; } + +int rxe_invalidate_mr(struct rxe_qp *qp, u32 rkey) +{ + struct rxe_dev *rxe = to_rdev(qp->ibqp.device); + struct rxe_mr *mr; + int ret; + + mr = rxe_pool_get_index(&rxe->mr_pool, rkey >> 8); + if (!mr) { + pr_err("%s: No MR for rkey %#x\n", __func__, rkey); + ret = -EINVAL; + goto err; + } + + if (rkey != mr->ibmr.rkey) { + pr_err("%s: rkey (%#x) doesn't match mr->ibmr.rkey (%#x)\n", + __func__, rkey, mr->ibmr.rkey); + ret = -EINVAL; + goto err_drop_ref; + } + + mr->state = RXE_MR_STATE_FREE; + ret = 0; + +err_drop_ref: + rxe_drop_ref(mr); +err: + return ret; +} + +void rxe_mr_cleanup(struct rxe_pool_entry *arg) +{ + struct rxe_mr *mr = container_of(arg, typeof(*mr), pelem); + int i; + + ib_umem_release(mr->umem); + + if (mr->map) { + for (i = 0; i < mr->num_map; i++) + kfree(mr->map[i]); + + kfree(mr->map); + } +} diff --git a/drivers/infiniband/sw/rxe/rxe_mw.c b/drivers/infiniband/sw/rxe/rxe_mw.c index 65215dde9974..594f8cef0a08 100644 --- a/drivers/infiniband/sw/rxe/rxe_mw.c +++ b/drivers/infiniband/sw/rxe/rxe_mw.c @@ -245,6 +245,73 @@ int rxe_bind_mw(struct rxe_qp *qp, struct rxe_send_wqe *wqe) return ret; } +static int rxe_check_invalidate_mw(struct rxe_qp *qp, struct rxe_mw *mw) +{ + if (unlikely(mw->state == RXE_MW_STATE_INVALID)) + return -EINVAL; + + /* o10-37.2.26 */ + if (unlikely(mw->ibmw.type == IB_MW_TYPE_1)) + return -EINVAL; + + return 0; +} + +static void rxe_do_invalidate_mw(struct rxe_mw *mw) +{ + struct rxe_qp *qp; + struct rxe_mr *mr; + + /* valid type 2 MW will always have a QP pointer */ + qp = mw->qp; + mw->qp = NULL; + 
rxe_drop_ref(qp); + + /* valid type 2 MW will always have an MR pointer */ + mr = mw->mr; + mw->mr = NULL; + atomic_dec(&mr->num_mw); + rxe_drop_ref(mr); + + mw->access = 0; + mw->addr = 0; + mw->length = 0; + mw->state = RXE_MW_STATE_FREE; +} + +int rxe_invalidate_mw(struct rxe_qp *qp, u32 rkey) +{ + struct rxe_dev *rxe = to_rdev(qp->ibqp.device); + unsigned long flags; + struct rxe_mw *mw; + int ret; + + mw = rxe_pool_get_index(&rxe->mw_pool, rkey >> 8); + if (!mw) { + ret = -EINVAL; + goto err; + } + + if (rkey != mw->ibmw.rkey) { + ret = -EINVAL; + goto err_drop_ref; + } + + spin_lock_irqsave(&mw->lock, flags); + + ret = rxe_check_invalidate_mw(qp, mw); + if (ret) + goto err_unlock; + + rxe_do_invalidate_mw(mw); +err_unlock: + spin_unlock_irqrestore(&mw->lock, flags); +err_drop_ref: + rxe_drop_ref(mw); +err: + return ret; +} + void rxe_mw_cleanup(struct rxe_pool_entry *elem) { struct rxe_mw *mw = container_of(elem, typeof(*mw), pelem); diff --git a/drivers/infiniband/sw/rxe/rxe_req.c b/drivers/infiniband/sw/rxe/rxe_req.c index 6583f8ca95dc..c57699cc6578 100644 --- a/drivers/infiniband/sw/rxe/rxe_req.c +++ b/drivers/infiniband/sw/rxe/rxe_req.c @@ -487,7 +487,7 @@ static int finish_packet(struct rxe_qp *qp, struct rxe_send_wqe *wqe, } else { err = copy_data(qp->pd, 0, &wqe->dma, payload_addr(pkt), paylen, - from_mr_obj, + RXE_FROM_MR_OBJ, &crc); if (err) return err; @@ -581,27 +581,25 @@ static void update_state(struct rxe_qp *qp, struct rxe_send_wqe *wqe, static int rxe_do_local_ops(struct rxe_qp *qp, struct rxe_send_wqe *wqe) { u8 opcode = wqe->wr.opcode; - struct rxe_dev *rxe; struct rxe_mr *mr; u32 rkey; int ret; switch (opcode) { case IB_WR_LOCAL_INV: - rxe = to_rdev(qp->ibqp.device); rkey = wqe->wr.ex.invalidate_rkey; - mr = rxe_pool_get_index(&rxe->mr_pool, rkey >> 8); - if (!mr) { - pr_err("No MR for rkey %#x\n", rkey); + if (rkey_is_mw(rkey)) + ret = rxe_invalidate_mw(qp, rkey); + else + ret = rxe_invalidate_mr(qp, rkey); + + if (unlikely(ret)) { 
wqe->status = IB_WC_LOC_QP_OP_ERR; - return -EINVAL; + return ret; } - mr->state = RXE_MR_STATE_FREE; - rxe_drop_ref(mr); break; case IB_WR_REG_MR: mr = to_rmr(wqe->wr.wr.reg.mr); - rxe_add_ref(mr); mr->state = RXE_MR_STATE_VALID; mr->access = wqe->wr.wr.reg.access; diff --git a/drivers/infiniband/sw/rxe/rxe_resp.c b/drivers/infiniband/sw/rxe/rxe_resp.c index 9c0ce1a4f2ea..5eca374149f7 100644 --- a/drivers/infiniband/sw/rxe/rxe_resp.c +++ b/drivers/infiniband/sw/rxe/rxe_resp.c @@ -35,6 +35,7 @@ enum resp_states { RESPST_ERR_TOO_MANY_RDMA_ATM_REQ, RESPST_ERR_RNR, RESPST_ERR_RKEY_VIOLATION, + RESPST_ERR_INVALIDATE_RKEY, RESPST_ERR_LENGTH, RESPST_ERR_CQ_OVERFLOW, RESPST_ERROR, @@ -68,6 +69,7 @@ static char *resp_state_name[] = { [RESPST_ERR_TOO_MANY_RDMA_ATM_REQ] = "ERR_TOO_MANY_RDMA_ATM_REQ", [RESPST_ERR_RNR] = "ERR_RNR", [RESPST_ERR_RKEY_VIOLATION] = "ERR_RKEY_VIOLATION", + [RESPST_ERR_INVALIDATE_RKEY] = "ERR_INVALIDATE_RKEY_VIOLATION", [RESPST_ERR_LENGTH] = "ERR_LENGTH", [RESPST_ERR_CQ_OVERFLOW] = "ERR_CQ_OVERFLOW", [RESPST_ERROR] = "ERROR", @@ -449,7 +451,7 @@ static enum resp_states check_rkey(struct rxe_qp *qp, resid = qp->resp.resid; pktlen = payload_size(pkt); - mr = lookup_mr(qp->pd, access, rkey, lookup_remote); + mr = lookup_mr(qp->pd, access, rkey, RXE_LOOKUP_REMOTE); if (!mr) { state = RESPST_ERR_RKEY_VIOLATION; goto err; @@ -503,7 +505,7 @@ static enum resp_states send_data_in(struct rxe_qp *qp, void *data_addr, int err; err = copy_data(qp->pd, IB_ACCESS_LOCAL_WRITE, &qp->resp.wqe->dma, - data_addr, data_len, to_mr_obj, NULL); + data_addr, data_len, RXE_TO_MR_OBJ, NULL); if (unlikely(err)) return (err == -ENOSPC) ? 
RESPST_ERR_LENGTH : RESPST_ERR_MALFORMED_WQE; @@ -519,7 +521,7 @@ static enum resp_states write_data_in(struct rxe_qp *qp, int data_len = payload_size(pkt); err = rxe_mr_copy(qp->resp.mr, qp->resp.va, payload_addr(pkt), data_len, - to_mr_obj, NULL); + RXE_TO_MR_OBJ, NULL); if (err) { rc = RESPST_ERR_RKEY_VIOLATION; goto out; @@ -720,7 +722,7 @@ static enum resp_states read_reply(struct rxe_qp *qp, return RESPST_ERR_RNR; err = rxe_mr_copy(res->read.mr, res->read.va, payload_addr(&ack_pkt), - payload, from_mr_obj, &icrc); + payload, RXE_FROM_MR_OBJ, &icrc); if (err) pr_err("Failed copying memory\n"); @@ -770,6 +772,14 @@ static void build_rdma_network_hdr(union rdma_network_hdr *hdr, memcpy(&hdr->ibgrh, ipv6_hdr(skb), sizeof(hdr->ibgrh)); } +static int invalidate_rkey(struct rxe_qp *qp, u32 rkey) +{ + if (rkey_is_mw(rkey)) + return rxe_invalidate_mw(qp, rkey); + else + return rxe_invalidate_mr(qp, rkey); +} + /* Executes a new request. A retried request never reach that function (send * and writes are discarded, and reads and atomics are retried elsewhere. 
*/ @@ -809,6 +819,14 @@ static enum resp_states execute(struct rxe_qp *qp, struct rxe_pkt_info *pkt) WARN_ON_ONCE(1); } + if (pkt->mask & RXE_IETH_MASK) { + u32 rkey = ieth_rkey(pkt); + + err = invalidate_rkey(qp, rkey); + if (err) + return RESPST_ERR_INVALIDATE_RKEY; + } + /* next expected psn, read handles this separately */ qp->resp.psn = (pkt->psn + 1) & BTH_PSN_MASK; qp->resp.ack_psn = qp->resp.psn; @@ -841,13 +859,13 @@ static enum resp_states do_complete(struct rxe_qp *qp, memset(&cqe, 0, sizeof(cqe)); if (qp->rcq->is_user) { - uwc->status = qp->resp.status; - uwc->qp_num = qp->ibqp.qp_num; - uwc->wr_id = wqe->wr_id; + uwc->status = qp->resp.status; + uwc->qp_num = qp->ibqp.qp_num; + uwc->wr_id = wqe->wr_id; } else { - wc->status = qp->resp.status; - wc->qp = &qp->ibqp; - wc->wr_id = wqe->wr_id; + wc->status = qp->resp.status; + wc->qp = &qp->ibqp; + wc->wr_id = wqe->wr_id; } if (wc->status == IB_WC_SUCCESS) { @@ -902,27 +920,14 @@ static enum resp_states do_complete(struct rxe_qp *qp, } if (pkt->mask & RXE_IETH_MASK) { - struct rxe_mr *rmr; - wc->wc_flags |= IB_WC_WITH_INVALIDATE; wc->ex.invalidate_rkey = ieth_rkey(pkt); - - rmr = rxe_pool_get_index(&rxe->mr_pool, - wc->ex.invalidate_rkey >> 8); - if (unlikely(!rmr)) { - pr_err("Bad rkey %#x invalidation\n", - wc->ex.invalidate_rkey); - return RESPST_ERROR; - } - rmr->state = RXE_MR_STATE_FREE; - rxe_drop_ref(rmr); } - wc->qp = &qp->ibqp; - if (pkt->mask & RXE_DETH_MASK) wc->src_qp = deth_sqp(pkt); + wc->qp = &qp->ibqp; wc->port_num = qp->attr.port_num; } } @@ -1336,6 +1341,13 @@ int rxe_responder(void *arg) } break; + case RESPST_ERR_INVALIDATE_RKEY: + /* RC - Class J. 
*/ + qp->resp.goto_error = 1; + qp->resp.status = IB_WC_REM_INV_REQ_ERR; + state = RESPST_COMPLETE; + break; + case RESPST_ERR_LENGTH: if (qp_type(qp) == IB_QPT_RC) { /* Class C */ diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.h b/drivers/infiniband/sw/rxe/rxe_verbs.h index 3d0ab8b7804f..47399bf7517e 100644 --- a/drivers/infiniband/sw/rxe/rxe_verbs.h +++ b/drivers/infiniband/sw/rxe/rxe_verbs.h @@ -278,6 +278,16 @@ enum rxe_mr_type { RXE_MR_TYPE_MR, }; +enum rxe_mr_copy_dir { + RXE_TO_MR_OBJ, + RXE_FROM_MR_OBJ, +}; + +enum rxe_mr_lookup_type { + RXE_LOOKUP_LOCAL, + RXE_LOOKUP_REMOTE, +}; + #define RXE_BUF_PER_MAP (PAGE_SIZE / sizeof(struct rxe_phys_buf)) struct rxe_phys_buf { @@ -289,6 +299,13 @@ struct rxe_map { struct rxe_phys_buf buf[RXE_BUF_PER_MAP]; }; +static inline int rkey_is_mw(u32 rkey) +{ + u32 index = rkey >> 8; + + return (index >= RXE_MIN_MW_INDEX) && (index <= RXE_MAX_MW_INDEX); +} + struct rxe_mr { struct rxe_pool_entry pelem; struct ib_mr ibmr; @@ -314,23 +331,23 @@ struct rxe_mr { u32 max_buf; u32 num_map; - struct rxe_map **map; - atomic_t num_mw; + + struct rxe_map **map; }; enum rxe_mw_state { - RXE_MW_STATE_INVALID = RXE_MR_STATE_INVALID, - RXE_MW_STATE_FREE = RXE_MR_STATE_FREE, - RXE_MW_STATE_VALID = RXE_MR_STATE_VALID, + RXE_MW_STATE_INVALID = RXE_MR_STATE_INVALID, + RXE_MW_STATE_FREE = RXE_MR_STATE_FREE, + RXE_MW_STATE_VALID = RXE_MR_STATE_VALID, }; struct rxe_mw { - struct ib_mw ibmw; - struct rxe_pool_entry pelem; + struct ib_mw ibmw; + struct rxe_pool_entry pelem; spinlock_t lock; enum rxe_mw_state state; - struct rxe_qp *qp; /* Type 2 only */ + struct rxe_qp *qp; /* Type 2 only */ struct rxe_mr *mr; int access; u64 addr; From patchwork Tue Jun 8 04:25:52 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Bob Pearson X-Patchwork-Id: 12305353 X-Patchwork-Delegate: jgg@ziepe.ca Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on 
aws-us-west-2-korg-lkml-1.web.codeaurora.org From: Bob Pearson To: jgg@nvidia.com, zyjzyj2000@gmail.com, monis@mellanox.com, linux-rdma@vger.kernel.org Cc: Bob Pearson Subject: [PATCH for-next v9 09/10] RDMA/rxe: Implement memory access through MWs Date: Mon, 7 Jun 2021 23:25:52 -0500 Message-Id: <20210608042552.33275-10-rpearsonhpe@gmail.com> X-Mailer: git-send-email 2.30.2 In-Reply-To: <20210608042552.33275-1-rpearsonhpe@gmail.com> References: <20210608042552.33275-1-rpearsonhpe@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-rdma@vger.kernel.org Add code to implement memory access through memory windows.
Signed-off-by: Bob Pearson --- drivers/infiniband/sw/rxe/rxe_loc.h | 1 + drivers/infiniband/sw/rxe/rxe_mw.c | 23 +++++++++++ drivers/infiniband/sw/rxe/rxe_resp.c | 55 +++++++++++++++++++-------- drivers/infiniband/sw/rxe/rxe_verbs.h | 11 ++++++ 4 files changed, 75 insertions(+), 15 deletions(-) diff --git a/drivers/infiniband/sw/rxe/rxe_loc.h b/drivers/infiniband/sw/rxe/rxe_loc.h index 8a4633aeb2a8..6e4b5e22541e 100644 --- a/drivers/infiniband/sw/rxe/rxe_loc.h +++ b/drivers/infiniband/sw/rxe/rxe_loc.h @@ -94,6 +94,7 @@ int rxe_alloc_mw(struct ib_mw *ibmw, struct ib_udata *udata); int rxe_dealloc_mw(struct ib_mw *ibmw); int rxe_bind_mw(struct rxe_qp *qp, struct rxe_send_wqe *wqe); int rxe_invalidate_mw(struct rxe_qp *qp, u32 rkey); +struct rxe_mw *rxe_lookup_mw(struct rxe_qp *qp, int access, u32 rkey); void rxe_mw_cleanup(struct rxe_pool_entry *arg); /* rxe_net.c */ diff --git a/drivers/infiniband/sw/rxe/rxe_mw.c b/drivers/infiniband/sw/rxe/rxe_mw.c index 594f8cef0a08..5ba77df7598e 100644 --- a/drivers/infiniband/sw/rxe/rxe_mw.c +++ b/drivers/infiniband/sw/rxe/rxe_mw.c @@ -312,6 +312,29 @@ int rxe_invalidate_mw(struct rxe_qp *qp, u32 rkey) return ret; } +struct rxe_mw *rxe_lookup_mw(struct rxe_qp *qp, int access, u32 rkey) +{ + struct rxe_dev *rxe = to_rdev(qp->ibqp.device); + struct rxe_pd *pd = to_rpd(qp->ibqp.pd); + struct rxe_mw *mw; + int index = rkey >> 8; + + mw = rxe_pool_get_index(&rxe->mw_pool, index); + if (!mw) + return NULL; + + if (unlikely((rxe_mw_rkey(mw) != rkey) || rxe_mw_pd(mw) != pd || + (mw->ibmw.type == IB_MW_TYPE_2 && mw->qp != qp) || + (mw->length == 0) || + (access && !(access & mw->access)) || + mw->state != RXE_MW_STATE_VALID)) { + rxe_drop_ref(mw); + return NULL; + } + + return mw; +} + void rxe_mw_cleanup(struct rxe_pool_entry *elem) { struct rxe_mw *mw = container_of(elem, typeof(*mw), pelem); diff --git a/drivers/infiniband/sw/rxe/rxe_resp.c b/drivers/infiniband/sw/rxe/rxe_resp.c index 759e9789cd4d..1ea576d42882 100644 --- 
a/drivers/infiniband/sw/rxe/rxe_resp.c +++ b/drivers/infiniband/sw/rxe/rxe_resp.c @@ -394,6 +394,7 @@ static enum resp_states check_rkey(struct rxe_qp *qp, struct rxe_pkt_info *pkt) { struct rxe_mr *mr = NULL; + struct rxe_mw *mw = NULL; u64 va; u32 rkey; u32 resid; @@ -405,6 +406,7 @@ static enum resp_states check_rkey(struct rxe_qp *qp, if (pkt->mask & (RXE_READ_MASK | RXE_WRITE_MASK)) { if (pkt->mask & RXE_RETH_MASK) { qp->resp.va = reth_va(pkt); + qp->resp.offset = 0; qp->resp.rkey = reth_rkey(pkt); qp->resp.resid = reth_len(pkt); qp->resp.length = reth_len(pkt); @@ -413,6 +415,7 @@ static enum resp_states check_rkey(struct rxe_qp *qp, : IB_ACCESS_REMOTE_WRITE; } else if (pkt->mask & RXE_ATOMIC_MASK) { qp->resp.va = atmeth_va(pkt); + qp->resp.offset = 0; qp->resp.rkey = atmeth_rkey(pkt); qp->resp.resid = sizeof(u64); access = IB_ACCESS_REMOTE_ATOMIC; @@ -432,18 +435,36 @@ static enum resp_states check_rkey(struct rxe_qp *qp, resid = qp->resp.resid; pktlen = payload_size(pkt); - mr = lookup_mr(qp->pd, access, rkey, RXE_LOOKUP_REMOTE); - if (!mr) { - state = RESPST_ERR_RKEY_VIOLATION; - goto err; - } + if (rkey_is_mw(rkey)) { + mw = rxe_lookup_mw(qp, access, rkey); + if (!mw) { + pr_err("%s: no MW matches rkey %#x\n", __func__, rkey); + state = RESPST_ERR_RKEY_VIOLATION; + goto err; + } - if (unlikely(mr->state == RXE_MR_STATE_FREE)) { - state = RESPST_ERR_RKEY_VIOLATION; - goto err; + mr = mw->mr; + if (!mr) { + pr_err("%s: MW doesn't have an MR\n", __func__); + state = RESPST_ERR_RKEY_VIOLATION; + goto err; + } + + if (mw->access & IB_ZERO_BASED) + qp->resp.offset = mw->addr; + + rxe_drop_ref(mw); + rxe_add_ref(mr); + } else { + mr = lookup_mr(qp->pd, access, rkey, RXE_LOOKUP_REMOTE); + if (!mr) { + pr_err("%s: no MR matches rkey %#x\n", __func__, rkey); + state = RESPST_ERR_RKEY_VIOLATION; + goto err; + } } - if (mr_check_range(mr, va, resid)) { + if (mr_check_range(mr, va + qp->resp.offset, resid)) { state = RESPST_ERR_RKEY_VIOLATION; goto err; } @@ -477,6 
+498,9 @@ static enum resp_states check_rkey(struct rxe_qp *qp, err: if (mr) rxe_drop_ref(mr); + if (mw) + rxe_drop_ref(mw); + return state; } @@ -501,8 +525,8 @@ static enum resp_states write_data_in(struct rxe_qp *qp, int err; int data_len = payload_size(pkt); - err = rxe_mr_copy(qp->resp.mr, qp->resp.va, payload_addr(pkt), data_len, - RXE_TO_MR_OBJ, NULL); + err = rxe_mr_copy(qp->resp.mr, qp->resp.va + qp->resp.offset, + payload_addr(pkt), data_len, RXE_TO_MR_OBJ, NULL); if (err) { rc = RESPST_ERR_RKEY_VIOLATION; goto out; @@ -521,7 +545,6 @@ static DEFINE_SPINLOCK(atomic_ops_lock); static enum resp_states process_atomic(struct rxe_qp *qp, struct rxe_pkt_info *pkt) { - u64 iova = atmeth_va(pkt); u64 *vaddr; enum resp_states ret; struct rxe_mr *mr = qp->resp.mr; @@ -531,7 +554,7 @@ static enum resp_states process_atomic(struct rxe_qp *qp, goto out; } - vaddr = iova_to_vaddr(mr, iova, sizeof(u64)); + vaddr = iova_to_vaddr(mr, qp->resp.va + qp->resp.offset, sizeof(u64)); /* check vaddr is 8 bytes aligned. 
*/ if (!vaddr || (uintptr_t)vaddr & 7) { @@ -655,8 +678,10 @@ static enum resp_states read_reply(struct rxe_qp *qp, res->type = RXE_READ_MASK; res->replay = 0; - res->read.va = qp->resp.va; - res->read.va_org = qp->resp.va; + res->read.va = qp->resp.va + + qp->resp.offset; + res->read.va_org = qp->resp.va + + qp->resp.offset; res->first_psn = req_pkt->psn; diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.h b/drivers/infiniband/sw/rxe/rxe_verbs.h index 74fcd871757d..cf8cae64f7df 100644 --- a/drivers/infiniband/sw/rxe/rxe_verbs.h +++ b/drivers/infiniband/sw/rxe/rxe_verbs.h @@ -183,6 +183,7 @@ struct rxe_resp_info { /* RDMA read / atomic only */ u64 va; + u64 offset; struct rxe_mr *mr; u32 resid; u32 rkey; @@ -480,6 +481,16 @@ static inline u32 mr_rkey(struct rxe_mr *mr) return mr->ibmr.rkey; } +static inline struct rxe_pd *rxe_mw_pd(struct rxe_mw *mw) +{ + return to_rpd(mw->ibmw.pd); +} + +static inline u32 rxe_mw_rkey(struct rxe_mw *mw) +{ + return mw->ibmw.rkey; +} + int rxe_register_device(struct rxe_dev *rxe, const char *ibdev_name); void rxe_mc_cleanup(struct rxe_pool_entry *arg); From patchwork Tue Jun 8 04:25:53 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Bob Pearson X-Patchwork-Id: 12305355 X-Patchwork-Delegate: jgg@ziepe.ca Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-15.8 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,FREEMAIL_FORGED_FROMDOMAIN,FREEMAIL_FROM, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 7FF20C4743F for ; Tue, 8 Jun 2021 04:26:39 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) 
by mail.kernel.org (Postfix) with ESMTP id 681F060240 for ; Tue, 8 Jun 2021 04:26:39 +0000 (UTC) From: Bob Pearson To: jgg@nvidia.com, zyjzyj2000@gmail.com, monis@mellanox.com, linux-rdma@vger.kernel.org Cc: Bob Pearson Subject: [PATCH for-next v9 10/10] RDMA/rxe: Disallow MR dereg and invalidate when bound Date: Mon, 7 Jun 2021 23:25:53 -0500 Message-Id: <20210608042552.33275-11-rpearsonhpe@gmail.com> X-Mailer: git-send-email 2.30.2 In-Reply-To: <20210608042552.33275-1-rpearsonhpe@gmail.com> References: <20210608042552.33275-1-rpearsonhpe@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-rdma@vger.kernel.org Check that an MR has no bound MWs before allowing a dereg or invalidate operation.
Signed-off-by: Bob Pearson --- drivers/infiniband/sw/rxe/rxe_loc.h | 1 + drivers/infiniband/sw/rxe/rxe_mr.c | 25 +++++++++++++++++++++++++ drivers/infiniband/sw/rxe/rxe_verbs.c | 11 ----------- 3 files changed, 26 insertions(+), 11 deletions(-) diff --git a/drivers/infiniband/sw/rxe/rxe_loc.h b/drivers/infiniband/sw/rxe/rxe_loc.h index 6e4b5e22541e..1ddb20855dee 100644 --- a/drivers/infiniband/sw/rxe/rxe_loc.h +++ b/drivers/infiniband/sw/rxe/rxe_loc.h @@ -87,6 +87,7 @@ struct rxe_mr *lookup_mr(struct rxe_pd *pd, int access, u32 key, int mr_check_range(struct rxe_mr *mr, u64 iova, size_t length); int advance_dma_data(struct rxe_dma_info *dma, unsigned int length); int rxe_invalidate_mr(struct rxe_qp *qp, u32 rkey); +int rxe_dereg_mr(struct ib_mr *ibmr, struct ib_udata *udata); void rxe_mr_cleanup(struct rxe_pool_entry *arg); /* rxe_mw.c */ diff --git a/drivers/infiniband/sw/rxe/rxe_mr.c b/drivers/infiniband/sw/rxe/rxe_mr.c index 3fb58d2c7814..7f169329a8bf 100644 --- a/drivers/infiniband/sw/rxe/rxe_mr.c +++ b/drivers/infiniband/sw/rxe/rxe_mr.c @@ -546,6 +546,13 @@ int rxe_invalidate_mr(struct rxe_qp *qp, u32 rkey) goto err_drop_ref; } + if (atomic_read(&mr->num_mw) > 0) { + pr_warn("%s: Attempt to invalidate an MR while bound to MWs\n", + __func__); + ret = -EINVAL; + goto err_drop_ref; + } + mr->state = RXE_MR_STATE_FREE; ret = 0; @@ -555,6 +562,24 @@ int rxe_invalidate_mr(struct rxe_qp *qp, u32 rkey) return ret; } +int rxe_dereg_mr(struct ib_mr *ibmr, struct ib_udata *udata) +{ + struct rxe_mr *mr = to_rmr(ibmr); + + if (atomic_read(&mr->num_mw) > 0) { + pr_warn("%s: Attempt to deregister an MR while bound to MWs\n", + __func__); + return -EINVAL; + } + + mr->state = RXE_MR_STATE_ZOMBIE; + rxe_drop_ref(mr_pd(mr)); + rxe_drop_index(mr); + rxe_drop_ref(mr); + + return 0; +} + void rxe_mr_cleanup(struct rxe_pool_entry *arg) { struct rxe_mr *mr = container_of(arg, typeof(*mr), pelem); diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.c 
b/drivers/infiniband/sw/rxe/rxe_verbs.c index 4860e8ab378e..3e0bab4994d1 100644 --- a/drivers/infiniband/sw/rxe/rxe_verbs.c +++ b/drivers/infiniband/sw/rxe/rxe_verbs.c @@ -913,17 +913,6 @@ static struct ib_mr *rxe_reg_user_mr(struct ib_pd *ibpd, return ERR_PTR(err); } -static int rxe_dereg_mr(struct ib_mr *ibmr, struct ib_udata *udata) -{ - struct rxe_mr *mr = to_rmr(ibmr); - - mr->state = RXE_MR_STATE_ZOMBIE; - rxe_drop_ref(mr_pd(mr)); - rxe_drop_index(mr); - rxe_drop_ref(mr); - return 0; -} - static struct ib_mr *rxe_alloc_mr(struct ib_pd *ibpd, enum ib_mr_type mr_type, u32 max_num_sg) {