From patchwork Fri May 21 20:18:16 2021
From: Bob Pearson <rpearsonhpe@gmail.com>
To: jgg@nvidia.com, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org
Cc: Bob Pearson <rpearsonhpe@gmail.com>
Subject: [PATCH for-next v7 01/10] RDMA/rxe: Add bind MW fields to rxe_send_wr
Date: Fri, 21 May 2021 15:18:16 -0500
Message-Id: <20210521201824.659565-2-rpearsonhpe@gmail.com>
In-Reply-To: <20210521201824.659565-1-rpearsonhpe@gmail.com>

Add fields to struct rxe_send_wr in rdma_user_rxe.h to support bind MW
work requests.

Signed-off-by: Bob Pearson <rpearsonhpe@gmail.com>
---
 include/uapi/rdma/rdma_user_rxe.h | 11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/include/uapi/rdma/rdma_user_rxe.h b/include/uapi/rdma/rdma_user_rxe.h
index 068433e2229d..b8f408ceb1a8 100644
--- a/include/uapi/rdma/rdma_user_rxe.h
+++ b/include/uapi/rdma/rdma_user_rxe.h
@@ -99,7 +99,17 @@ struct rxe_send_wr {
 			__u32	remote_qkey;
 			__u16	pkey_index;
 		} ud;
+		struct {
+			__aligned_u64	addr;
+			__aligned_u64	length;
+			__u32	mr_lkey;
+			__u32	mw_rkey;
+			__u32	rkey;
+			__u32	access;
+			__u32	flags;
+		} mw;
 		/* reg is only used by the kernel and is not part of the uapi */
+#ifdef __KERNEL__
 		struct {
 			union {
 				struct ib_mr *mr;
@@ -108,6 +118,7 @@ struct rxe_send_wr {
 			__u32	key;
 			__u32	access;
 		} reg;
+#endif
 	} wr;
 };
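For illustration, a sketch of how a user-space provider might populate
the new wr.mw fields when building a bind-MW work queue element. The
helper name is hypothetical and the caller is assumed to supply valid
keys, address and access flags; only the fields added by this patch
are touched.

/* Hypothetical helper: fill the new wr.mw fields of a rxe_send_wr.
 * Values are taken from the caller as-is; nothing is validated here.
 */
#include <string.h>
#include <rdma/rdma_user_rxe.h>

static void fill_bind_mw_wr(struct rxe_send_wr *wr, __u64 addr,
			    __u64 length, __u32 mr_lkey, __u32 mw_rkey,
			    __u32 new_key, __u32 access)
{
	memset(&wr->wr.mw, 0, sizeof(wr->wr.mw));
	wr->wr.mw.addr = addr;		/* start of the window */
	wr->wr.mw.length = length;	/* extent of the window */
	wr->wr.mw.mr_lkey = mr_lkey;	/* lkey of the backing MR */
	wr->wr.mw.mw_rkey = mw_rkey;	/* current rkey of the MW */
	wr->wr.mw.rkey = new_key;	/* requested new rkey */
	wr->wr.mw.access = access;	/* remote access rights */
}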
From patchwork Fri May 21 20:18:17 2021
From: Bob Pearson <rpearsonhpe@gmail.com>
To: jgg@nvidia.com, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org
Cc: Bob Pearson <rpearsonhpe@gmail.com>
Subject: [PATCH for-next v7 02/10] RDMA/rxe: Return errors for add index and key
Date: Fri, 21 May 2021 15:18:17 -0500
Message-Id: <20210521201824.659565-3-rpearsonhpe@gmail.com>
In-Reply-To: <20210521201824.659565-1-rpearsonhpe@gmail.com>

Modify rxe_add_index() and rxe_add_key() to return an error if the
index or key is already present in the pool. Currently they print a
warning and silently fail, with bad consequences for the caller.

Signed-off-by: Bob Pearson <rpearsonhpe@gmail.com>
---
 drivers/infiniband/sw/rxe/rxe_pool.c | 44 ++++++++++++++++++----------
 drivers/infiniband/sw/rxe/rxe_pool.h |  8 ++---
 2 files changed, 32 insertions(+), 20 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe_pool.c b/drivers/infiniband/sw/rxe/rxe_pool.c
index d24901f2af3f..df0bec719341 100644
--- a/drivers/infiniband/sw/rxe/rxe_pool.c
+++ b/drivers/infiniband/sw/rxe/rxe_pool.c
@@ -183,7 +183,7 @@ static u32 alloc_index(struct rxe_pool *pool)
 	return index + pool->index.min_index;
 }
 
-static void insert_index(struct rxe_pool *pool, struct rxe_pool_entry *new)
+static int rxe_insert_index(struct rxe_pool *pool, struct rxe_pool_entry *new)
 {
 	struct rb_node **link = &pool->index.tree.rb_node;
 	struct rb_node *parent = NULL;
@@ -195,7 +195,7 @@ static void insert_index(struct rxe_pool *pool, struct rxe_pool_entry *new)
 
 		if (elem->index == new->index) {
 			pr_warn("element already exists!\n");
-			goto out;
+			return -EINVAL;
 		}
 
 		if (elem->index > new->index)
@@ -206,11 +206,11 @@ static void insert_index(struct rxe_pool *pool, struct rxe_pool_entry *new)
 
 	rb_link_node(&new->index_node, parent, link);
 	rb_insert_color(&new->index_node, &pool->index.tree);
-out:
-	return;
+
+	return 0;
 }
 
-static void insert_key(struct rxe_pool *pool, struct rxe_pool_entry *new)
+static int rxe_insert_key(struct rxe_pool *pool, struct rxe_pool_entry *new)
 {
 	struct rb_node **link = &pool->key.tree.rb_node;
 	struct rb_node *parent = NULL;
@@ -226,7 +226,7 @@ static void insert_key(struct rxe_pool *pool, struct rxe_pool_entry *new)
 
 		if (cmp == 0) {
 			pr_warn("key already exists!\n");
-			goto out;
+			return -EINVAL;
 		}
 
 		if (cmp > 0)
@@ -237,26 +237,32 @@ static void insert_key(struct rxe_pool *pool, struct rxe_pool_entry *new)
 
 	rb_link_node(&new->key_node, parent, link);
 	rb_insert_color(&new->key_node, &pool->key.tree);
-out:
-	return;
+
+	return 0;
 }
 
-void __rxe_add_key_locked(struct rxe_pool_entry *elem, void *key)
+int __rxe_add_key_locked(struct rxe_pool_entry *elem, void *key)
 {
 	struct rxe_pool *pool = elem->pool;
+	int err;
 
 	memcpy((u8 *)elem + pool->key.key_offset, key, pool->key.key_size);
-	insert_key(pool, elem);
+	err = rxe_insert_key(pool, elem);
+
+	return err;
 }
 
-void __rxe_add_key(struct rxe_pool_entry *elem, void *key)
+int __rxe_add_key(struct rxe_pool_entry *elem, void *key)
 {
 	struct rxe_pool *pool = elem->pool;
 	unsigned long flags;
+	int err;
 
 	write_lock_irqsave(&pool->pool_lock, flags);
-	__rxe_add_key_locked(elem, key);
+	err = __rxe_add_key_locked(elem, key);
 	write_unlock_irqrestore(&pool->pool_lock, flags);
+
+	return err;
 }
 
 void __rxe_drop_key_locked(struct rxe_pool_entry *elem)
@@ -276,22 +282,28 @@ void __rxe_drop_key(struct rxe_pool_entry *elem)
 	write_unlock_irqrestore(&pool->pool_lock, flags);
 }
 
-void __rxe_add_index_locked(struct rxe_pool_entry *elem)
+int __rxe_add_index_locked(struct rxe_pool_entry *elem)
 {
 	struct rxe_pool *pool = elem->pool;
+	int err;
 
 	elem->index = alloc_index(pool);
-	insert_index(pool, elem);
+	err = rxe_insert_index(pool, elem);
+
+	return err;
 }
 
-void __rxe_add_index(struct rxe_pool_entry *elem)
+int __rxe_add_index(struct rxe_pool_entry *elem)
 {
 	struct rxe_pool *pool = elem->pool;
	unsigned long flags;
+	int err;
 
 	write_lock_irqsave(&pool->pool_lock, flags);
-	__rxe_add_index_locked(elem);
+	err = __rxe_add_index_locked(elem);
 	write_unlock_irqrestore(&pool->pool_lock, flags);
+
+	return err;
 }
 
 void __rxe_drop_index_locked(struct rxe_pool_entry *elem)
diff --git a/drivers/infiniband/sw/rxe/rxe_pool.h b/drivers/infiniband/sw/rxe/rxe_pool.h
index 61210b300a78..1feca1bffced 100644
--- a/drivers/infiniband/sw/rxe/rxe_pool.h
+++ b/drivers/infiniband/sw/rxe/rxe_pool.h
@@ -111,11 +111,11 @@ int __rxe_add_to_pool(struct rxe_pool *pool, struct rxe_pool_entry *elem);
 /* assign an index to an indexed object and insert object into
  * pool's rb tree holding and not holding the pool_lock
  */
-void __rxe_add_index_locked(struct rxe_pool_entry *elem);
+int __rxe_add_index_locked(struct rxe_pool_entry *elem);
 
 #define rxe_add_index_locked(obj) __rxe_add_index_locked(&(obj)->pelem)
 
-void __rxe_add_index(struct rxe_pool_entry *elem);
+int __rxe_add_index(struct rxe_pool_entry *elem);
 
 #define rxe_add_index(obj) __rxe_add_index(&(obj)->pelem)
 
@@ -133,11 +133,11 @@ void __rxe_drop_index(struct rxe_pool_entry *elem);
 /* assign a key to a keyed object and insert object into
  * pool's rb tree holding and not holding pool_lock
  */
-void __rxe_add_key_locked(struct rxe_pool_entry *elem, void *key);
+int __rxe_add_key_locked(struct rxe_pool_entry *elem, void *key);
 
 #define rxe_add_key_locked(obj, key) __rxe_add_key_locked(&(obj)->pelem, key)
 
-void __rxe_add_key(struct rxe_pool_entry *elem, void *key);
+int __rxe_add_key(struct rxe_pool_entry *elem, void *key);
 
 #define rxe_add_key(obj, key) __rxe_add_key(&(obj)->pelem, key)
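A minimal sketch of what the new return values buy a caller. The
function name and unwind sequence here are illustrative only, not code
from this series; the real callers are converted in later patches.

/* Illustrative only: a creation path can now unwind cleanly when the
 * index insert detects a duplicate, instead of continuing with a pool
 * in an inconsistent state.
 */
static int create_indexed_object_example(struct rxe_dev *rxe,
					 struct rxe_mr *mr)
{
	int err;

	err = rxe_add_to_pool(&rxe->mr_pool, mr);
	if (err)
		return err;

	err = rxe_add_index(mr);	/* previously returned void */
	if (err) {
		rxe_drop_ref(mr);	/* undo the pool insertion */
		return err;
	}

	return 0;
}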
From patchwork Fri May 21 20:18:18 2021
From: Bob Pearson <rpearsonhpe@gmail.com>
To: jgg@nvidia.com, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org
Cc: Bob Pearson <rpearsonhpe@gmail.com>
Subject: [PATCH for-next v7 03/10] RDMA/rxe: Enable MW object pool
Date: Fri, 21 May 2021 15:18:18 -0500
Message-Id: <20210521201824.659565-4-rpearsonhpe@gmail.com>
In-Reply-To: <20210521201824.659565-1-rpearsonhpe@gmail.com>

Currently the rxe driver has a rxe_mw struct object but nothing about
memory windows is enabled. This patch turns on memory windows and does
some minor cleanup:

Set the device attribute in rxe.c so max_mw = RXE_MAX_MW.
Change parameters in rxe_param.h so that RXE_MAX_MW is the same as
RXE_MAX_MR.
Reduce the number of MRs and MWs to 4K from 256K.
Add device capability bits for type 2A and 2B memory windows.
Remove RXE_MR_TYPE_MW from the rxe_mr_type enum.

Signed-off-by: Bob Pearson <rpearsonhpe@gmail.com>
---
 drivers/infiniband/sw/rxe/rxe.c       |  1 +
 drivers/infiniband/sw/rxe/rxe_param.h | 19 ++++++++++++-------
 drivers/infiniband/sw/rxe/rxe_verbs.h |  1 -
 3 files changed, 13 insertions(+), 8 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe.c b/drivers/infiniband/sw/rxe/rxe.c
index 95f0de0c8b49..8e0f9c489cab 100644
--- a/drivers/infiniband/sw/rxe/rxe.c
+++ b/drivers/infiniband/sw/rxe/rxe.c
@@ -54,6 +54,7 @@ static void rxe_init_device_param(struct rxe_dev *rxe)
 	rxe->attr.max_cq = RXE_MAX_CQ;
 	rxe->attr.max_cqe = (1 << RXE_MAX_LOG_CQE) - 1;
 	rxe->attr.max_mr = RXE_MAX_MR;
+	rxe->attr.max_mw = RXE_MAX_MW;
 	rxe->attr.max_pd = RXE_MAX_PD;
 	rxe->attr.max_qp_rd_atom = RXE_MAX_QP_RD_ATOM;
 	rxe->attr.max_res_rd_atom = RXE_MAX_RES_RD_ATOM;
diff --git a/drivers/infiniband/sw/rxe/rxe_param.h b/drivers/infiniband/sw/rxe/rxe_param.h
index 25ab50d9b7c2..742e6ec93686 100644
--- a/drivers/infiniband/sw/rxe/rxe_param.h
+++ b/drivers/infiniband/sw/rxe/rxe_param.h
@@ -37,7 +37,6 @@ static inline enum ib_mtu eth_mtu_int_to_enum(int mtu)
 enum rxe_device_param {
 	RXE_MAX_MR_SIZE			= -1ull,
 	RXE_PAGE_SIZE_CAP		= 0xfffff000,
-	RXE_MAX_QP			= 0x10000,
 	RXE_MAX_QP_WR			= 0x4000,
 	RXE_DEVICE_CAP_FLAGS		= IB_DEVICE_BAD_PKEY_CNTR
 					| IB_DEVICE_BAD_QKEY_CNTR
@@ -49,7 +48,10 @@ enum rxe_device_param {
 					| IB_DEVICE_RC_RNR_NAK_GEN
 					| IB_DEVICE_SRQ_RESIZE
 					| IB_DEVICE_MEM_MGT_EXTENSIONS
-					| IB_DEVICE_ALLOW_USER_UNREG,
+					| IB_DEVICE_ALLOW_USER_UNREG
+					| IB_DEVICE_MEM_WINDOW
+					| IB_DEVICE_MEM_WINDOW_TYPE_2A
+					| IB_DEVICE_MEM_WINDOW_TYPE_2B,
 	RXE_MAX_SGE			= 32,
 	RXE_MAX_WQE_SIZE		= sizeof(struct rxe_send_wqe) +
 					  sizeof(struct ib_sge) * RXE_MAX_SGE,
@@ -58,7 +60,6 @@ enum rxe_device_param {
 	RXE_MAX_SGE_RD			= 32,
 	RXE_MAX_CQ			= 16384,
 	RXE_MAX_LOG_CQE			= 15,
-	RXE_MAX_MR			= 256 * 1024,
 	RXE_MAX_PD			= 0x7ffc,
 	RXE_MAX_QP_RD_ATOM		= 128,
 	RXE_MAX_RES_RD_ATOM		= 0x3f000,
@@ -67,7 +68,6 @@ enum rxe_device_param {
 	RXE_MAX_MCAST_QP_ATTACH		= 56,
 	RXE_MAX_TOT_MCAST_QP_ATTACH	= 0x70000,
 	RXE_MAX_AH			= 100,
-	RXE_MAX_SRQ			= 960,
 	RXE_MAX_SRQ_WR			= 0x4000,
 	RXE_MIN_SRQ_WR			= 1,
 	RXE_MAX_SRQ_SGE			= 27,
@@ -80,16 +80,21 @@ enum rxe_device_param {
 
 	RXE_NUM_PORT			= 1,
 
+	RXE_MAX_QP			= 0x10000,
 	RXE_MIN_QP_INDEX		= 16,
 	RXE_MAX_QP_INDEX		= 0x00020000,
 
+	RXE_MAX_SRQ			= 0x00001000,
 	RXE_MIN_SRQ_INDEX		= 0x00020001,
 	RXE_MAX_SRQ_INDEX		= 0x00040000,
 
+	RXE_MAX_MR			= 0x00001000,
+	RXE_MAX_MW			= 0x00001000,
 	RXE_MIN_MR_INDEX		= 0x00000001,
-	RXE_MAX_MR_INDEX		= 0x00040000,
-	RXE_MIN_MW_INDEX		= 0x00040001,
-	RXE_MAX_MW_INDEX		= 0x00060000,
+	RXE_MAX_MR_INDEX		= 0x00010000,
+	RXE_MIN_MW_INDEX		= 0x00010001,
+	RXE_MAX_MW_INDEX		= 0x00020000,
+
 	RXE_MAX_PKT_PER_ACK		= 64,
 
 	RXE_MAX_UNACKED_PSNS		= 128,
diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.h b/drivers/infiniband/sw/rxe/rxe_verbs.h
index 11eba7a3ba8f..8d32e3f50813 100644
--- a/drivers/infiniband/sw/rxe/rxe_verbs.h
+++ b/drivers/infiniband/sw/rxe/rxe_verbs.h
@@ -273,7 +273,6 @@ enum rxe_mr_type {
 	RXE_MR_TYPE_NONE,
 	RXE_MR_TYPE_DMA,
 	RXE_MR_TYPE_MR,
-	RXE_MR_TYPE_MW,
 };
 
 #define RXE_BUF_PER_MAP	(PAGE_SIZE / sizeof(struct rxe_phys_buf))
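From user space, the new limit and capability bits become visible
through a standard device query. A small libibverbs sketch, assuming an
already-open ibv_context; error handling is trimmed for brevity:

/* Illustrative check for the MW support this patch advertises. */
#include <stdio.h>
#include <infiniband/verbs.h>

static void print_mw_caps(struct ibv_context *ctx)
{
	struct ibv_device_attr attr;

	if (ibv_query_device(ctx, &attr))
		return;

	printf("max_mw = %d\n", attr.max_mw);
	printf("type 2A: %s\n",
	       (attr.device_cap_flags & IBV_DEVICE_MEM_WINDOW_TYPE_2A) ?
	       "yes" : "no");
	printf("type 2B: %s\n",
	       (attr.device_cap_flags & IBV_DEVICE_MEM_WINDOW_TYPE_2B) ?
	       "yes" : "no");
}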
From patchwork Fri May 21 20:18:19 2021
From: Bob Pearson <rpearsonhpe@gmail.com>
To: jgg@nvidia.com, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org
Cc: Bob Pearson <rpearsonhpe@gmail.com>, kernel test robot
Subject: [PATCH for-next v7 04/10] RDMA/rxe: Add ib_alloc_mw and ib_dealloc_mw verbs
Date: Fri, 21 May 2021 15:18:19 -0500
Message-Id: <20210521201824.659565-5-rpearsonhpe@gmail.com>
In-Reply-To: <20210521201824.659565-1-rpearsonhpe@gmail.com>

Add ib_alloc_mw and ib_dealloc_mw verbs APIs.

Add a new file rxe_mw.c focused on MWs. Change the 8-bit key generator
to return random keys. Add a cleanup routine for MWs. Add the verbs
routines to ib_device_ops.

Signed-off-by: Bob Pearson <rpearsonhpe@gmail.com>
---
v7: Removed duplicate INIT_RDMA_OBJ_SIZE(ib_mw, ...) in rxe_verbs.c.
    This was already added in patch 03/10.
    Reported-by: Jason Gunthorpe
    Reported-by: kernel test robot
---
 drivers/infiniband/sw/rxe/Makefile    |  1 +
 drivers/infiniband/sw/rxe/rxe_loc.h   |  6 +++
 drivers/infiniband/sw/rxe/rxe_mr.c    | 20 +++++-----
 drivers/infiniband/sw/rxe/rxe_mw.c    | 53 +++++++++++++++++++++++++++
 drivers/infiniband/sw/rxe/rxe_pool.c  |  1 +
 drivers/infiniband/sw/rxe/rxe_verbs.c |  2 +
 drivers/infiniband/sw/rxe/rxe_verbs.h |  2 +
 7 files changed, 74 insertions(+), 11 deletions(-)
 create mode 100644 drivers/infiniband/sw/rxe/rxe_mw.c

diff --git a/drivers/infiniband/sw/rxe/Makefile b/drivers/infiniband/sw/rxe/Makefile
index 66af72dca759..1e24673e9318 100644
--- a/drivers/infiniband/sw/rxe/Makefile
+++ b/drivers/infiniband/sw/rxe/Makefile
@@ -15,6 +15,7 @@ rdma_rxe-y := \
 	rxe_qp.o \
 	rxe_cq.o \
 	rxe_mr.o \
+	rxe_mw.o \
 	rxe_opcode.o \
 	rxe_mmap.o \
 	rxe_icrc.o \
diff --git a/drivers/infiniband/sw/rxe/rxe_loc.h b/drivers/infiniband/sw/rxe/rxe_loc.h
index b21038cb370f..422b9481d5f6 100644
--- a/drivers/infiniband/sw/rxe/rxe_loc.h
+++ b/drivers/infiniband/sw/rxe/rxe_loc.h
@@ -76,6 +76,7 @@ enum copy_direction {
 	from_mr_obj,
 };
 
+u8 rxe_get_next_key(u32 last_key);
 void rxe_mr_init_dma(struct rxe_pd *pd, int access, struct rxe_mr *mr);
 
 int rxe_mr_init_user(struct rxe_pd *pd, u64 start, u64 length, u64 iova,
@@ -106,6 +107,11 @@ void rxe_mr_cleanup(struct rxe_pool_entry *arg);
 
 int advance_dma_data(struct rxe_dma_info *dma, unsigned int length);
 
+/* rxe_mw.c */
+int rxe_alloc_mw(struct ib_mw *ibmw, struct ib_udata *udata);
+int rxe_dealloc_mw(struct ib_mw *ibmw);
+void rxe_mw_cleanup(struct rxe_pool_entry *arg);
+
 /* rxe_net.c */
 void rxe_loopback(struct sk_buff *skb);
 int rxe_send(struct rxe_pkt_info *pkt, struct sk_buff *skb);
diff --git a/drivers/infiniband/sw/rxe/rxe_mr.c b/drivers/infiniband/sw/rxe/rxe_mr.c
index 373b46aab043..cfd35a442c10 100644
--- a/drivers/infiniband/sw/rxe/rxe_mr.c
+++ b/drivers/infiniband/sw/rxe/rxe_mr.c
@@ -7,19 +7,17 @@
 #include "rxe.h"
 #include "rxe_loc.h"
 
-/*
- * lfsr (linear feedback shift register) with period 255
+/* Return a random 8 bit key value that is
+ * different than the last_key. Set last_key to -1
+ * if this is the first key for an MR or MW
  */
-static u8 rxe_get_key(void)
+u8 rxe_get_next_key(u32 last_key)
 {
-	static u32 key = 1;
-
-	key = key << 1;
-
-	key |= (0 != (key & 0x100)) ^ (0 != (key & 0x10))
-		^ (0 != (key & 0x80)) ^ (0 != (key & 0x40));
+	u8 key;
 
-	key &= 0xff;
+	do {
+		get_random_bytes(&key, 1);
+	} while (key == last_key);
 
 	return key;
 }
@@ -47,7 +45,7 @@ int mr_check_range(struct rxe_mr *mr, u64 iova, size_t length)
 
 static void rxe_mr_init(int access, struct rxe_mr *mr)
 {
-	u32 lkey = mr->pelem.index << 8 | rxe_get_key();
+	u32 lkey = mr->pelem.index << 8 | rxe_get_next_key(-1);
 	u32 rkey = (access & IB_ACCESS_REMOTE) ? lkey : 0;
 
 	mr->ibmr.lkey = lkey;
diff --git a/drivers/infiniband/sw/rxe/rxe_mw.c b/drivers/infiniband/sw/rxe/rxe_mw.c
new file mode 100644
index 000000000000..69128e298d44
--- /dev/null
+++ b/drivers/infiniband/sw/rxe/rxe_mw.c
@@ -0,0 +1,53 @@
+// SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB
+/*
+ * Copyright (c) 2020 Hewlett Packard Enterprise, Inc. All rights reserved.
+ */
+
+#include "rxe.h"
+
+int rxe_alloc_mw(struct ib_mw *ibmw, struct ib_udata *udata)
+{
+	struct rxe_mw *mw = to_rmw(ibmw);
+	struct rxe_pd *pd = to_rpd(ibmw->pd);
+	struct rxe_dev *rxe = to_rdev(ibmw->device);
+	int ret;
+
+	rxe_add_ref(pd);
+
+	ret = rxe_add_to_pool(&rxe->mw_pool, mw);
+	if (ret) {
+		rxe_drop_ref(pd);
+		return ret;
+	}
+
+	rxe_add_index(mw);
+	ibmw->rkey = (mw->pelem.index << 8) | rxe_get_next_key(-1);
+	mw->state = (mw->ibmw.type == IB_MW_TYPE_2) ?
+			RXE_MW_STATE_FREE : RXE_MW_STATE_VALID;
+	spin_lock_init(&mw->lock);
+
+	return 0;
+}
+
+int rxe_dealloc_mw(struct ib_mw *ibmw)
+{
+	struct rxe_mw *mw = to_rmw(ibmw);
+	struct rxe_pd *pd = to_rpd(ibmw->pd);
+	unsigned long flags;
+
+	spin_lock_irqsave(&mw->lock, flags);
+	mw->state = RXE_MW_STATE_INVALID;
+	spin_unlock_irqrestore(&mw->lock, flags);
+
+	rxe_drop_ref(mw);
+	rxe_drop_ref(pd);
+
+	return 0;
+}
+
+void rxe_mw_cleanup(struct rxe_pool_entry *elem)
+{
+	struct rxe_mw *mw = container_of(elem, typeof(*mw), pelem);
+
+	rxe_drop_index(mw);
+}
diff --git a/drivers/infiniband/sw/rxe/rxe_pool.c b/drivers/infiniband/sw/rxe/rxe_pool.c
index df0bec719341..0b8e7c6255a2 100644
--- a/drivers/infiniband/sw/rxe/rxe_pool.c
+++ b/drivers/infiniband/sw/rxe/rxe_pool.c
@@ -65,6 +65,7 @@ struct rxe_type_info rxe_type_info[RXE_NUM_TYPES] = {
 		.name		= "rxe-mw",
 		.size		= sizeof(struct rxe_mw),
 		.elem_offset	= offsetof(struct rxe_mw, pelem),
+		.cleanup	= rxe_mw_cleanup,
 		.flags		= RXE_POOL_INDEX | RXE_POOL_NO_ALLOC,
 		.max_index	= RXE_MAX_MW_INDEX,
 		.min_index	= RXE_MIN_MW_INDEX,
diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.c b/drivers/infiniband/sw/rxe/rxe_verbs.c
index 86a0965a88f6..552a1ea9c8b7 100644
--- a/drivers/infiniband/sw/rxe/rxe_verbs.c
+++ b/drivers/infiniband/sw/rxe/rxe_verbs.c
@@ -1060,6 +1060,7 @@ static const struct ib_device_ops rxe_dev_ops = {
 	.alloc_hw_stats = rxe_ib_alloc_hw_stats,
 	.alloc_mr = rxe_alloc_mr,
+	.alloc_mw = rxe_alloc_mw,
 	.alloc_pd = rxe_alloc_pd,
 	.alloc_ucontext = rxe_alloc_ucontext,
 	.attach_mcast = rxe_attach_mcast,
@@ -1069,6 +1070,7 @@ static const struct ib_device_ops rxe_dev_ops = {
 	.create_srq = rxe_create_srq,
 	.create_user_ah = rxe_create_ah,
 	.dealloc_driver = rxe_dealloc,
+	.dealloc_mw = rxe_dealloc_mw,
 	.dealloc_pd = rxe_dealloc_pd,
 	.dealloc_ucontext = rxe_dealloc_ucontext,
 	.dereg_mr = rxe_dereg_mr,
diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.h b/drivers/infiniband/sw/rxe/rxe_verbs.h
index 8d32e3f50813..c8597ae8c833 100644
--- a/drivers/infiniband/sw/rxe/rxe_verbs.h
+++ b/drivers/infiniband/sw/rxe/rxe_verbs.h
@@ -323,6 +323,8 @@ enum rxe_mw_state {
 struct rxe_mw {
 	struct ib_mw ibmw;
 	struct rxe_pool_entry pelem;
+	spinlock_t lock;
+	enum rxe_mw_state state;
 };
 
 struct rxe_mc_grp {
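The user-space counterpart of these two verbs, as a small libibverbs
sketch. pd is assumed to be an existing protection domain on an rxe
device; error handling is minimal.

/* Illustrative: allocate and free a type 2 memory window. */
#include <infiniband/verbs.h>

static int try_mw(struct ibv_pd *pd)
{
	struct ibv_mw *mw;

	mw = ibv_alloc_mw(pd, IBV_MW_TYPE_2);	/* reaches rxe_alloc_mw() */
	if (!mw)
		return -1;

	/* mw->rkey now holds (pool index << 8) | random 8-bit key */
	return ibv_dealloc_mw(mw);		/* reaches rxe_dealloc_mw() */
}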
From patchwork Fri May 21 20:18:20 2021
From: Bob Pearson <rpearsonhpe@gmail.com>
To: jgg@nvidia.com, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org
Cc: Bob Pearson <rpearsonhpe@gmail.com>
Subject: [PATCH for-next v7 05/10] RDMA/rxe: Replace WR_REG_MASK by WR_LOCAL_OP_MASK
Date: Fri, 21 May 2021 15:18:20 -0500
Message-Id: <20210521201824.659565-6-rpearsonhpe@gmail.com>
In-Reply-To: <20210521201824.659565-1-rpearsonhpe@gmail.com>

Rxe has two mask bits, WR_LOCAL_MASK and WR_REG_MASK, with WR_REG_MASK
used to indicate any local operation and WR_LOCAL_MASK unused. This
patch replaces both of these with one mask bit, WR_LOCAL_OP_MASK, which
is clearer.

Signed-off-by: Bob Pearson <rpearsonhpe@gmail.com>
---
 drivers/infiniband/sw/rxe/rxe_opcode.c | 4 ++--
 drivers/infiniband/sw/rxe/rxe_opcode.h | 3 +--
 drivers/infiniband/sw/rxe/rxe_req.c    | 2 +-
 drivers/infiniband/sw/rxe/rxe_verbs.c  | 2 +-
 4 files changed, 5 insertions(+), 6 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe_opcode.c b/drivers/infiniband/sw/rxe/rxe_opcode.c
index 0cb4b01fd910..1e4b67b048f3 100644
--- a/drivers/infiniband/sw/rxe/rxe_opcode.c
+++ b/drivers/infiniband/sw/rxe/rxe_opcode.c
@@ -87,13 +87,13 @@ struct rxe_wr_opcode_info rxe_wr_opcode_info[] = {
 	[IB_WR_LOCAL_INV]			= {
 		.name	= "IB_WR_LOCAL_INV",
 		.mask	= {
-			[IB_QPT_RC]	= WR_REG_MASK,
+			[IB_QPT_RC]	= WR_LOCAL_OP_MASK,
 		},
 	},
 	[IB_WR_REG_MR]				= {
 		.name	= "IB_WR_REG_MR",
 		.mask	= {
-			[IB_QPT_RC]	= WR_REG_MASK,
+			[IB_QPT_RC]	= WR_LOCAL_OP_MASK,
 		},
 	},
 };
diff --git a/drivers/infiniband/sw/rxe/rxe_opcode.h b/drivers/infiniband/sw/rxe/rxe_opcode.h
index 1041ac9a9233..e02f039b8c44 100644
--- a/drivers/infiniband/sw/rxe/rxe_opcode.h
+++ b/drivers/infiniband/sw/rxe/rxe_opcode.h
@@ -19,8 +19,7 @@ enum rxe_wr_mask {
 	WR_SEND_MASK			= BIT(2),
 	WR_READ_MASK			= BIT(3),
 	WR_WRITE_MASK			= BIT(4),
-	WR_LOCAL_MASK			= BIT(5),
-	WR_REG_MASK			= BIT(6),
+	WR_LOCAL_OP_MASK		= BIT(5),
 
 	WR_READ_OR_WRITE_MASK		= WR_READ_MASK | WR_WRITE_MASK,
 	WR_READ_WRITE_OR_SEND_MASK	= WR_READ_OR_WRITE_MASK | WR_SEND_MASK,
diff --git a/drivers/infiniband/sw/rxe/rxe_req.c b/drivers/infiniband/sw/rxe/rxe_req.c
index 3664cdae7e1f..0d4dcd514c55 100644
--- a/drivers/infiniband/sw/rxe/rxe_req.c
+++ b/drivers/infiniband/sw/rxe/rxe_req.c
@@ -593,7 +593,7 @@ int rxe_requester(void *arg)
 	if (unlikely(!wqe))
 		goto exit;
 
-	if (wqe->mask & WR_REG_MASK) {
+	if (wqe->mask & WR_LOCAL_OP_MASK) {
 		if (wqe->wr.opcode == IB_WR_LOCAL_INV) {
 			struct rxe_dev *rxe = to_rdev(qp->ibqp.device);
 			struct rxe_mr *rmr;
diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.c b/drivers/infiniband/sw/rxe/rxe_verbs.c
index 552a1ea9c8b7..4860e8ab378e 100644
--- a/drivers/infiniband/sw/rxe/rxe_verbs.c
+++ b/drivers/infiniband/sw/rxe/rxe_verbs.c
@@ -577,7 +577,7 @@ static void init_send_wqe(struct rxe_qp *qp, const struct ib_send_wr *ibwr,
 	init_send_wr(qp, &wqe->wr, ibwr);
 
 	/* local operation */
-	if (unlikely(mask & WR_REG_MASK)) {
+	if (unlikely(mask & WR_LOCAL_OP_MASK)) {
 		wqe->mask = mask;
 		wqe->state = wqe_state_posted;
 		return;
From patchwork Fri May 21 20:18:21 2021
From: Bob Pearson <rpearsonhpe@gmail.com>
To: jgg@nvidia.com, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org
Cc: Bob Pearson <rpearsonhpe@gmail.com>
Subject: [PATCH for-next v7 06/10] RDMA/rxe: Move local ops to subroutine
Date: Fri, 21 May 2021 15:18:21 -0500
Message-Id: <20210521201824.659565-7-rpearsonhpe@gmail.com>
In-Reply-To: <20210521201824.659565-1-rpearsonhpe@gmail.com>

Simplify rxe_requester() by moving the local operations to a
subroutine. Add an error return for an illegal send WR opcode. Move
next_index ahead of rxe_run_task, which fixes a small bug where work
completions were delayed until after the next wqe, which was not the
intended behavior.

Signed-off-by: Bob Pearson <rpearsonhpe@gmail.com>
---
 drivers/infiniband/sw/rxe/rxe_req.c | 89 +++++++++++++++++------------
 1 file changed, 54 insertions(+), 35 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe_req.c b/drivers/infiniband/sw/rxe/rxe_req.c
index 0d4dcd514c55..9163aac41848 100644
--- a/drivers/infiniband/sw/rxe/rxe_req.c
+++ b/drivers/infiniband/sw/rxe/rxe_req.c
@@ -555,6 +555,56 @@ static void update_state(struct rxe_qp *qp, struct rxe_send_wqe *wqe,
 				jiffies + qp->qp_timeout_jiffies);
 }
 
+static int rxe_do_local_ops(struct rxe_qp *qp, struct rxe_send_wqe *wqe)
+{
+	u8 opcode = wqe->wr.opcode;
+	struct rxe_dev *rxe;
+	struct rxe_mr *mr;
+	u32 rkey;
+
+	switch (opcode) {
+	case IB_WR_LOCAL_INV:
+		rxe = to_rdev(qp->ibqp.device);
+		rkey = wqe->wr.ex.invalidate_rkey;
+		mr = rxe_pool_get_index(&rxe->mr_pool, rkey >> 8);
+		if (!mr) {
+			pr_err("No MR for rkey %#x\n", rkey);
+			wqe->state = wqe_state_error;
+			wqe->status = IB_WC_LOC_QP_OP_ERR;
+			return -EINVAL;
+		}
+		mr->state = RXE_MR_STATE_FREE;
+		rxe_drop_ref(mr);
+		break;
+	case IB_WR_REG_MR:
+		mr = to_rmr(wqe->wr.wr.reg.mr);
+
+		rxe_add_ref(mr);
+		mr->state = RXE_MR_STATE_VALID;
+		mr->access = wqe->wr.wr.reg.access;
+		mr->ibmr.lkey = wqe->wr.wr.reg.key;
+		mr->ibmr.rkey = wqe->wr.wr.reg.key;
+		mr->iova = wqe->wr.wr.reg.mr->iova;
+		rxe_drop_ref(mr);
+		break;
+	default:
+		pr_err("Unexpected send wqe opcode %d\n", opcode);
+		wqe->state = wqe_state_error;
+		wqe->status = IB_WC_LOC_QP_OP_ERR;
+		return -EINVAL;
+	}
+
+	wqe->state = wqe_state_done;
+	wqe->status = IB_WC_SUCCESS;
+	qp->req.wqe_index = next_index(qp->sq.queue, qp->req.wqe_index);
+
+	if ((wqe->wr.send_flags & IB_SEND_SIGNALED) ||
+	    qp->sq_sig_type == IB_SIGNAL_ALL_WR)
+		rxe_run_task(&qp->comp.task, 1);
+
+	return 0;
+}
+
 int rxe_requester(void *arg)
 {
 	struct rxe_qp *qp = (struct rxe_qp *)arg;
@@ -594,42 +644,11 @@ int rxe_requester(void *arg)
 		goto exit;
 
 	if (wqe->mask & WR_LOCAL_OP_MASK) {
-		if (wqe->wr.opcode == IB_WR_LOCAL_INV) {
-			struct rxe_dev *rxe = to_rdev(qp->ibqp.device);
-			struct rxe_mr *rmr;
-
-			rmr = rxe_pool_get_index(&rxe->mr_pool,
-						 wqe->wr.ex.invalidate_rkey >> 8);
-			if (!rmr) {
-				pr_err("No mr for key %#x\n",
-				       wqe->wr.ex.invalidate_rkey);
-				wqe->state = wqe_state_error;
-				wqe->status = IB_WC_MW_BIND_ERR;
-				goto exit;
-			}
-			rmr->state = RXE_MR_STATE_FREE;
-			rxe_drop_ref(rmr);
-			wqe->state = wqe_state_done;
-			wqe->status = IB_WC_SUCCESS;
-		} else if (wqe->wr.opcode == IB_WR_REG_MR) {
-			struct rxe_mr *rmr = to_rmr(wqe->wr.wr.reg.mr);
-
-			rmr->state = RXE_MR_STATE_VALID;
-			rmr->access = wqe->wr.wr.reg.access;
-			rmr->ibmr.lkey = wqe->wr.wr.reg.key;
-			rmr->ibmr.rkey = wqe->wr.wr.reg.key;
-			rmr->iova = wqe->wr.wr.reg.mr->iova;
-			wqe->state = wqe_state_done;
-			wqe->status = IB_WC_SUCCESS;
-		} else {
+		ret = rxe_do_local_ops(qp, wqe);
+		if (unlikely(ret))
 			goto exit;
-		}
-		if ((wqe->wr.send_flags & IB_SEND_SIGNALED) ||
-		    qp->sq_sig_type == IB_SIGNAL_ALL_WR)
-			rxe_run_task(&qp->comp.task, 1);
-		qp->req.wqe_index = next_index(qp->sq.queue,
-					       qp->req.wqe_index);
-		goto next_wqe;
+		else
+			goto next_wqe;
 	}
 
 	if (unlikely(qp_type(qp) == IB_QPT_RC &&
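For reference, the simplest user-space trigger for this local-operation
path is a local invalidate. A libibverbs sketch, assuming a valid qp
and rkey; the signaled flag is what makes the comp.task run above
produce a work completion.

/* Illustrative: post a local invalidate handled by rxe_do_local_ops(). */
#include <infiniband/verbs.h>

static int post_local_inv(struct ibv_qp *qp, uint32_t rkey)
{
	struct ibv_send_wr wr = {}, *bad_wr;

	wr.opcode = IBV_WR_LOCAL_INV;
	wr.send_flags = IBV_SEND_SIGNALED;
	wr.invalidate_rkey = rkey;

	return ibv_post_send(qp, &wr, &bad_wr);
}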
From patchwork Fri May 21 20:18:22 2021
From: Bob Pearson <rpearsonhpe@gmail.com>
To: jgg@nvidia.com, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org
Cc: Bob Pearson <rpearsonhpe@gmail.com>
Subject: [PATCH for-next v7 07/10] RDMA/rxe: Add support for bind MW work requests
Date: Fri, 21 May 2021 15:18:22 -0500
Message-Id: <20210521201824.659565-8-rpearsonhpe@gmail.com>
In-Reply-To: <20210521201824.659565-1-rpearsonhpe@gmail.com>

Add support for bind MW work requests from user space. Since rdma/core
does not support bind MW in ib_send_wr there is no way to support bind
MW in kernel space.

Added bind_mw local operation in rxe_req.c. Added bind_mw WR operation
in rxe_opcode.c. Added bind_mw WC in rxe_comp.c. Added additional
fields to rxe_mw in rxe_verbs.h. Added rxe_do_dealloc_mw() subroutine
to clean up an mw when rxe_dealloc_mw is called. Added code to
implement the bind_mw operation in rxe_mw.c.

Signed-off-by: Bob Pearson <rpearsonhpe@gmail.com>
---
 drivers/infiniband/sw/rxe/rxe_comp.c   |   1 +
 drivers/infiniband/sw/rxe/rxe_loc.h    |   1 +
 drivers/infiniband/sw/rxe/rxe_mw.c     | 202 ++++++++++++++++++++++++-
 drivers/infiniband/sw/rxe/rxe_opcode.c |   7 +
 drivers/infiniband/sw/rxe/rxe_req.c    |   9 ++
 drivers/infiniband/sw/rxe/rxe_verbs.h  |  15 +-
 6 files changed, 230 insertions(+), 5 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe_comp.c b/drivers/infiniband/sw/rxe/rxe_comp.c
index 2af26737d32d..bc5488af5f55 100644
--- a/drivers/infiniband/sw/rxe/rxe_comp.c
+++ b/drivers/infiniband/sw/rxe/rxe_comp.c
@@ -103,6 +103,7 @@ static enum ib_wc_opcode wr_to_wc_opcode(enum ib_wr_opcode opcode)
 	case IB_WR_RDMA_READ_WITH_INV:		return IB_WC_RDMA_READ;
 	case IB_WR_LOCAL_INV:			return IB_WC_LOCAL_INV;
 	case IB_WR_REG_MR:			return IB_WC_REG_MR;
+	case IB_WR_BIND_MW:			return IB_WC_BIND_MW;
 
 	default:
 		return 0xff;
diff --git a/drivers/infiniband/sw/rxe/rxe_loc.h b/drivers/infiniband/sw/rxe/rxe_loc.h
index 422b9481d5f6..e97048e77451 100644
--- a/drivers/infiniband/sw/rxe/rxe_loc.h
+++ b/drivers/infiniband/sw/rxe/rxe_loc.h
@@ -110,6 +110,7 @@ int advance_dma_data(struct rxe_dma_info *dma, unsigned int length);
 /* rxe_mw.c */
 int rxe_alloc_mw(struct ib_mw *ibmw, struct ib_udata *udata);
 int rxe_dealloc_mw(struct ib_mw *ibmw);
+int rxe_bind_mw(struct rxe_qp *qp, struct rxe_send_wqe *wqe);
 void rxe_mw_cleanup(struct rxe_pool_entry *arg);
 
 /* rxe_net.c */
diff --git a/drivers/infiniband/sw/rxe/rxe_mw.c b/drivers/infiniband/sw/rxe/rxe_mw.c
index 69128e298d44..65215dde9974 100644
--- a/drivers/infiniband/sw/rxe/rxe_mw.c
+++ b/drivers/infiniband/sw/rxe/rxe_mw.c
@@ -29,6 +29,29 @@ int rxe_alloc_mw(struct ib_mw *ibmw, struct ib_udata *udata)
 	return 0;
 }
 
+static void rxe_do_dealloc_mw(struct rxe_mw *mw)
+{
+	if (mw->mr) {
+		struct rxe_mr *mr = mw->mr;
+
+		mw->mr = NULL;
+		atomic_dec(&mr->num_mw);
+		rxe_drop_ref(mr);
+	}
+
+	if (mw->qp) {
+		struct rxe_qp *qp = mw->qp;
+
+		mw->qp = NULL;
+		rxe_drop_ref(qp);
+	}
+
+	mw->access = 0;
+	mw->addr = 0;
+	mw->length = 0;
+	mw->state = RXE_MW_STATE_INVALID;
+}
+
 int rxe_dealloc_mw(struct ib_mw *ibmw)
 {
 	struct rxe_mw *mw = to_rmw(ibmw);
@@ -36,7 +59,7 @@ int rxe_dealloc_mw(struct ib_mw *ibmw)
 	unsigned long flags;
 
 	spin_lock_irqsave(&mw->lock, flags);
-	mw->state = RXE_MW_STATE_INVALID;
+	rxe_do_dealloc_mw(mw);
 	spin_unlock_irqrestore(&mw->lock, flags);
 
 	rxe_drop_ref(mw);
@@ -45,6 +68,183 @@ int rxe_dealloc_mw(struct ib_mw *ibmw)
 	return 0;
 }
 
+static int rxe_check_bind_mw(struct rxe_qp *qp, struct rxe_send_wqe *wqe,
+			     struct rxe_mw *mw, struct rxe_mr *mr)
+{
+	if (mw->ibmw.type == IB_MW_TYPE_1) {
+		if (unlikely(mw->state != RXE_MW_STATE_VALID)) {
+			pr_err_once(
+				"attempt to bind a type 1 MW not in the valid state\n");
+			return -EINVAL;
+		}
+
+		/* o10-36.2.2 */
+		if (unlikely((mw->access & IB_ZERO_BASED))) {
+			pr_err_once("attempt to bind a zero based type 1 MW\n");
+			return -EINVAL;
+		}
+	}
+
+	if (mw->ibmw.type == IB_MW_TYPE_2) {
+		/* o10-37.2.30 */
+		if (unlikely(mw->state != RXE_MW_STATE_FREE)) {
+			pr_err_once(
+				"attempt to bind a type 2 MW not in the free state\n");
+			return -EINVAL;
+		}
+
+		/* C10-72 */
+		if (unlikely(qp->pd != to_rpd(mw->ibmw.pd))) {
+			pr_err_once(
+				"attempt to bind type 2 MW with qp with different PD\n");
+			return -EINVAL;
+		}
+
+		/* o10-37.2.40 */
+		if (unlikely(!mr || wqe->wr.wr.mw.length == 0)) {
+			pr_err_once(
+				"attempt to invalidate type 2 MW by binding with NULL or zero length MR\n");
+			return -EINVAL;
+		}
+	}
+
+	if (unlikely((wqe->wr.wr.mw.rkey & 0xff) == (mw->ibmw.rkey & 0xff))) {
+		pr_err_once("attempt to bind MW with same key\n");
+		return -EINVAL;
+	}
+
+	/* remaining checks only apply to a nonzero MR */
+	if (!mr)
+		return 0;
+
+	if (unlikely(mr->access & IB_ZERO_BASED)) {
+		pr_err_once("attempt to bind MW to zero based MR\n");
+		return -EINVAL;
+	}
+
+	/* C10-73 */
+	if (unlikely(!(mr->access & IB_ACCESS_MW_BIND))) {
+		pr_err_once(
+			"attempt to bind an MW to an MR without bind access\n");
+		return -EINVAL;
+	}
+
+	/* C10-74 */
+	if (unlikely((mw->access &
+		      (IB_ACCESS_REMOTE_WRITE | IB_ACCESS_REMOTE_ATOMIC)) &&
+		     !(mr->access & IB_ACCESS_LOCAL_WRITE))) {
+		pr_err_once(
+			"attempt to bind an writeable MW to an MR without local write access\n");
+		return -EINVAL;
+	}
+
+	/* C10-75 */
+	if (mw->access & IB_ZERO_BASED) {
+		if (unlikely(wqe->wr.wr.mw.length > mr->length)) {
+			pr_err_once(
+				"attempt to bind a ZB MW outside of the MR\n");
+			return -EINVAL;
+		}
+	} else {
+		if (unlikely((wqe->wr.wr.mw.addr < mr->iova) ||
+			     ((wqe->wr.wr.mw.addr + wqe->wr.wr.mw.length) >
+			      (mr->iova + mr->length)))) {
+			pr_err_once(
+				"attempt to bind a VA MW outside of the MR\n");
+			return -EINVAL;
+		}
+	}
+
+	return 0;
+}
+
+static void rxe_do_bind_mw(struct rxe_qp *qp, struct rxe_send_wqe *wqe,
+			   struct rxe_mw *mw, struct rxe_mr *mr)
+{
+	u32 rkey;
+	u32 new_rkey;
+
+	rkey = mw->ibmw.rkey;
+	new_rkey = (rkey & 0xffffff00) | (wqe->wr.wr.mw.rkey & 0x000000ff);
+
+	mw->ibmw.rkey = new_rkey;
+	mw->access = wqe->wr.wr.mw.access;
+	mw->state = RXE_MW_STATE_VALID;
+	mw->addr = wqe->wr.wr.mw.addr;
+	mw->length = wqe->wr.wr.mw.length;
+
+	if (mw->mr) {
+		rxe_drop_ref(mw->mr);
+		atomic_dec(&mw->mr->num_mw);
+		mw->mr = NULL;
+	}
+
+	if (mw->length) {
+		mw->mr = mr;
+		atomic_inc(&mr->num_mw);
+		rxe_add_ref(mr);
+	}
+
+	if (mw->ibmw.type == IB_MW_TYPE_2) {
+		rxe_add_ref(qp);
+		mw->qp = qp;
+	}
+}
+
+int rxe_bind_mw(struct rxe_qp *qp, struct rxe_send_wqe *wqe)
+{
+	int ret;
+	struct rxe_mw *mw;
+	struct rxe_mr *mr;
+	struct rxe_dev *rxe = to_rdev(qp->ibqp.device);
+	unsigned long flags;
+
+	mw = rxe_pool_get_index(&rxe->mw_pool,
+				wqe->wr.wr.mw.mw_rkey >> 8);
+	if (unlikely(!mw)) {
+		ret = -EINVAL;
+		goto err;
+	}
+
+	if (unlikely(mw->ibmw.rkey != wqe->wr.wr.mw.mw_rkey)) {
+		ret = -EINVAL;
+		goto err_drop_mw;
+	}
+
+	if (likely(wqe->wr.wr.mw.length)) {
+		mr = rxe_pool_get_index(&rxe->mr_pool,
+					wqe->wr.wr.mw.mr_lkey >> 8);
+		if (unlikely(!mr)) {
+			ret = -EINVAL;
+			goto err_drop_mw;
+		}
+
+		if (unlikely(mr->ibmr.lkey != wqe->wr.wr.mw.mr_lkey)) {
+			ret = -EINVAL;
+			goto err_drop_mr;
+		}
+	} else {
+		mr = NULL;
+	}
+
+	spin_lock_irqsave(&mw->lock, flags);
+
+	ret = rxe_check_bind_mw(qp, wqe, mw, mr);
+	if (ret)
+		goto err_unlock;
+
+	rxe_do_bind_mw(qp, wqe, mw, mr);
+err_unlock:
+	spin_unlock_irqrestore(&mw->lock, flags);
+err_drop_mr:
+	if (mr)
+		rxe_drop_ref(mr);
+err_drop_mw:
+	rxe_drop_ref(mw);
+err:
+	return ret;
+}
+
 void rxe_mw_cleanup(struct rxe_pool_entry *elem)
 {
 	struct rxe_mw *mw = container_of(elem, typeof(*mw), pelem);
diff --git a/drivers/infiniband/sw/rxe/rxe_opcode.c b/drivers/infiniband/sw/rxe/rxe_opcode.c
index 1e4b67b048f3..3ef5a10a6efd 100644
--- a/drivers/infiniband/sw/rxe/rxe_opcode.c
+++ b/drivers/infiniband/sw/rxe/rxe_opcode.c
@@ -96,6 +96,13 @@ struct rxe_wr_opcode_info rxe_wr_opcode_info[] = {
 			[IB_QPT_RC]	= WR_LOCAL_OP_MASK,
 		},
 	},
+	[IB_WR_BIND_MW]				= {
+		.name	= "IB_WR_BIND_MW",
+		.mask	= {
+			[IB_QPT_RC]	= WR_LOCAL_OP_MASK,
+			[IB_QPT_UC]	= WR_LOCAL_OP_MASK,
+		},
+	},
 };
 
 struct rxe_opcode_info rxe_opcode[RXE_NUM_OPCODE] = {
diff --git a/drivers/infiniband/sw/rxe/rxe_req.c b/drivers/infiniband/sw/rxe/rxe_req.c
index 9163aac41848..29fa41b4d2ac 100644
--- a/drivers/infiniband/sw/rxe/rxe_req.c
+++ b/drivers/infiniband/sw/rxe/rxe_req.c
@@ -561,6 +561,7 @@ static int rxe_do_local_ops(struct rxe_qp *qp, struct rxe_send_wqe *wqe)
 	struct rxe_dev *rxe;
 	struct rxe_mr *mr;
 	u32 rkey;
+	int ret;
 
 	switch (opcode) {
 	case IB_WR_LOCAL_INV:
@@ -587,6 +588,14 @@ static int rxe_do_local_ops(struct rxe_qp *qp, struct rxe_send_wqe *wqe)
 		mr->iova = wqe->wr.wr.reg.mr->iova;
 		rxe_drop_ref(mr);
 		break;
+	case IB_WR_BIND_MW:
+		ret = rxe_bind_mw(qp, wqe);
+		if (ret) {
+			wqe->state = wqe_state_error;
+			wqe->status = IB_WC_MW_BIND_ERR;
+			return -EINVAL;
+		}
+		break;
 	default:
 		pr_err("Unexpected send wqe opcode %d\n", opcode);
 		wqe->state = wqe_state_error;
diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.h b/drivers/infiniband/sw/rxe/rxe_verbs.h
index c8597ae8c833..7da47b8c707b 100644
--- a/drivers/infiniband/sw/rxe/rxe_verbs.h
+++ b/drivers/infiniband/sw/rxe/rxe_verbs.h
@@ -312,6 +312,8 @@ struct rxe_mr {
 
 	u32			num_map;
 	struct rxe_map		**map;
+
+	atomic_t		num_mw;
 };
 
 enum rxe_mw_state {
@@ -321,10 +323,15 @@ enum rxe_mw_state {
 };
 
 struct rxe_mw {
-	struct ib_mw ibmw;
-	struct rxe_pool_entry pelem;
-	spinlock_t lock;
-	enum rxe_mw_state state;
+	struct ib_mw		ibmw;
+	struct rxe_pool_entry	pelem;
+	spinlock_t		lock;
+	enum rxe_mw_state	state;
+	struct rxe_qp		*qp; /* Type 2 only */
+	struct rxe_mr		*mr;
+	int			access;
+	u64			addr;
+	u64			length;
 };
 
 struct rxe_mc_grp {
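From user space, this path is driven by posting a bind-MW work request.
A libibverbs sketch for a type 2 window, assuming qp, mw and mr are
valid objects created elsewhere; ibv_inc_rkey() produces the new 8-bit
key variant that rxe_do_bind_mw() folds into the rkey.

/* Illustrative: post a type 2 bind-MW WR handled by rxe_bind_mw(). */
#include <stdint.h>
#include <infiniband/verbs.h>

static int post_bind_mw(struct ibv_qp *qp, struct ibv_mw *mw,
			struct ibv_mr *mr, void *addr, size_t len)
{
	struct ibv_send_wr wr = {}, *bad_wr;

	wr.opcode = IBV_WR_BIND_MW;
	wr.send_flags = IBV_SEND_SIGNALED;
	wr.bind_mw.mw = mw;
	wr.bind_mw.rkey = ibv_inc_rkey(mw->rkey);	/* new key */
	wr.bind_mw.bind_info.mr = mr;
	wr.bind_mw.bind_info.addr = (uint64_t)(uintptr_t)addr;
	wr.bind_mw.bind_info.length = len;
	wr.bind_mw.bind_info.mw_access_flags = IBV_ACCESS_REMOTE_WRITE;

	return ibv_post_send(qp, &wr, &bad_wr);
}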
From patchwork Fri May 21 20:18:23 2021
From: Bob Pearson
To: jgg@nvidia.com, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org
Cc: Bob Pearson
Subject: [PATCH for-next v7 08/10] RDMA/rxe: Implement invalidate MW operations
Date: Fri, 21 May 2021 15:18:23 -0500
Message-Id: <20210521201824.659565-9-rpearsonhpe@gmail.com>
In-Reply-To: <20210521201824.659565-1-rpearsonhpe@gmail.com>
References: <20210521201824.659565-1-rpearsonhpe@gmail.com>

Implement the invalidate MW operation and clean up the invalidate MR
operation. Add code to perform remote invalidate for send with
invalidate, and code to perform local invalidate. Delete some blank
lines in rxe_loc.h.

Signed-off-by: Bob Pearson
---
v6: Added rxe_ to subroutine names in lines that changed.
v3: Replaced lower case enums with upper case ones and moved them to
    rxe_verbs.h, which is where enums live.
---
 drivers/infiniband/sw/rxe/rxe_comp.c  |  4 +-
 drivers/infiniband/sw/rxe/rxe_loc.h   | 29 ++--------
 drivers/infiniband/sw/rxe/rxe_mr.c    | 81 ++++++++++++++++++---------
 drivers/infiniband/sw/rxe/rxe_mw.c    | 67 ++++++++++++++++++++++
 drivers/infiniband/sw/rxe/rxe_req.c   | 24 ++++----
 drivers/infiniband/sw/rxe/rxe_resp.c  | 60 ++++++++++++--------
 drivers/infiniband/sw/rxe/rxe_verbs.h | 33 ++++++++---
 7 files changed, 204 insertions(+), 94 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe_comp.c b/drivers/infiniband/sw/rxe/rxe_comp.c
index bc5488af5f55..207aa7ef52c4 100644
--- a/drivers/infiniband/sw/rxe/rxe_comp.c
+++ b/drivers/infiniband/sw/rxe/rxe_comp.c
@@ -346,7 +346,7 @@ static inline enum comp_state do_read(struct rxe_qp *qp,
 
 	ret = copy_data(qp->pd, IB_ACCESS_LOCAL_WRITE,
 			&wqe->dma, payload_addr(pkt),
-			payload_size(pkt), to_mr_obj, NULL);
+			payload_size(pkt), RXE_TO_MR_OBJ, NULL);
 	if (ret)
 		return COMPST_ERROR;
 
@@ -366,7 +366,7 @@ static inline enum comp_state do_atomic(struct rxe_qp *qp,
 
 	ret = copy_data(qp->pd, IB_ACCESS_LOCAL_WRITE,
 			&wqe->dma, &atomic_orig,
-			sizeof(u64), to_mr_obj, NULL);
+			sizeof(u64), RXE_TO_MR_OBJ, NULL);
 	if (ret)
 		return COMPST_ERROR;
 	else
diff --git a/drivers/infiniband/sw/rxe/rxe_loc.h b/drivers/infiniband/sw/rxe/rxe_loc.h
index e97048e77451..8a4633aeb2a8 100644
--- a/drivers/infiniband/sw/rxe/rxe_loc.h
+++ b/drivers/infiniband/sw/rxe/rxe_loc.h
@@ -71,46 +71,29 @@ struct rxe_mmap_info *rxe_create_mmap_info(struct rxe_dev *dev, u32 size,
 int rxe_mmap(struct ib_ucontext *context, struct vm_area_struct *vma);
 
 /* rxe_mr.c */
-enum copy_direction {
-	to_mr_obj,
-	from_mr_obj,
-};
-
 u8 rxe_get_next_key(u32 last_key);
 void rxe_mr_init_dma(struct rxe_pd *pd, int access, struct rxe_mr *mr);
-
 int rxe_mr_init_user(struct rxe_pd *pd, u64 start, u64 length, u64 iova,
 		     int access, struct rxe_mr *mr);
-
 int rxe_mr_init_fast(struct rxe_pd *pd, int max_pages, struct rxe_mr *mr);
-
 int rxe_mr_copy(struct rxe_mr *mr, u64 iova, void *addr, int length,
-		enum copy_direction dir, u32 *crcp);
-
+		enum rxe_mr_copy_dir dir, u32 *crcp);
 int copy_data(struct rxe_pd *pd, int access, struct rxe_dma_info *dma,
 	      void *addr, int length,
-	      enum copy_direction dir, u32 *crcp);
-
+	      enum rxe_mr_copy_dir dir, u32 *crcp);
 void *iova_to_vaddr(struct rxe_mr *mr, u64 iova, int length);
-
-enum lookup_type {
-	lookup_local,
-	lookup_remote,
-};
-
 struct rxe_mr *lookup_mr(struct rxe_pd *pd, int access, u32 key,
-			 enum lookup_type type);
-
+			 enum rxe_mr_lookup_type type);
 int mr_check_range(struct rxe_mr *mr, u64 iova, size_t length);
-
-void rxe_mr_cleanup(struct rxe_pool_entry *arg);
-
 int advance_dma_data(struct rxe_dma_info *dma, unsigned int length);
+int rxe_invalidate_mr(struct rxe_qp *qp, u32 rkey);
+void rxe_mr_cleanup(struct rxe_pool_entry *arg);
 
 /* rxe_mw.c */
 int rxe_alloc_mw(struct ib_mw *ibmw, struct ib_udata *udata);
 int rxe_dealloc_mw(struct ib_mw *ibmw);
 int rxe_bind_mw(struct rxe_qp *qp, struct rxe_send_wqe *wqe);
+int rxe_invalidate_mw(struct rxe_qp *qp, u32 rkey);
 void rxe_mw_cleanup(struct rxe_pool_entry *arg);
 
 /* rxe_net.c */
diff --git a/drivers/infiniband/sw/rxe/rxe_mr.c b/drivers/infiniband/sw/rxe/rxe_mr.c
index cfd35a442c10..3fb58d2c7814 100644
--- a/drivers/infiniband/sw/rxe/rxe_mr.c
+++ b/drivers/infiniband/sw/rxe/rxe_mr.c
@@ -55,21 +55,6 @@ static void rxe_mr_init(int access, struct rxe_mr *mr)
 	mr->map_shift = ilog2(RXE_BUF_PER_MAP);
 }
 
-void rxe_mr_cleanup(struct rxe_pool_entry *arg)
-{
-	struct rxe_mr *mr = container_of(arg, typeof(*mr), pelem);
-	int i;
-
-	ib_umem_release(mr->umem);
-
-	if (mr->map) {
-		for (i = 0; i < mr->num_map; i++)
-			kfree(mr->map[i]);
-
-		kfree(mr->map);
-	}
-}
-
 static int rxe_mr_alloc(struct rxe_mr *mr, int num_buf)
 {
 	int i;
@@ -298,7 +283,7 @@ void *iova_to_vaddr(struct rxe_mr *mr, u64 iova, int length)
  * crc32 if crcp is not zero. caller must hold a reference to mr
  */
 int rxe_mr_copy(struct rxe_mr *mr, u64 iova, void *addr, int length,
-		enum copy_direction dir, u32 *crcp)
+		enum rxe_mr_copy_dir dir, u32 *crcp)
 {
 	int			err;
 	int			bytes;
@@ -316,9 +301,9 @@ int rxe_mr_copy(struct rxe_mr *mr, u64 iova, void *addr, int length,
 	if (mr->type == RXE_MR_TYPE_DMA) {
 		u8 *src, *dest;
 
-		src = (dir == to_mr_obj) ? addr : ((void *)(uintptr_t)iova);
+		src = (dir == RXE_TO_MR_OBJ) ? addr : ((void *)(uintptr_t)iova);
 
-		dest = (dir == to_mr_obj) ? ((void *)(uintptr_t)iova) : addr;
+		dest = (dir == RXE_TO_MR_OBJ) ? ((void *)(uintptr_t)iova) : addr;
 
 		memcpy(dest, src, length);
 
@@ -346,8 +331,8 @@ int rxe_mr_copy(struct rxe_mr *mr, u64 iova, void *addr, int length,
 		u8 *src, *dest;
 
 		va = (u8 *)(uintptr_t)buf->addr + offset;
-		src = (dir == to_mr_obj) ? addr : va;
-		dest = (dir == to_mr_obj) ? va : addr;
+		src = (dir == RXE_TO_MR_OBJ) ? addr : va;
+		dest = (dir == RXE_TO_MR_OBJ) ? va : addr;
 
 		bytes = buf->size - offset;
 
@@ -392,7 +377,7 @@ int copy_data(
 	struct rxe_dma_info	*dma,
 	void			*addr,
 	int			length,
-	enum copy_direction	dir,
+	enum rxe_mr_copy_dir	dir,
 	u32			*crcp)
 {
 	int			bytes;
@@ -412,7 +397,7 @@ int copy_data(
 	}
 
 	if (sge->length && (offset < sge->length)) {
-		mr = lookup_mr(pd, access, sge->lkey, lookup_local);
+		mr = lookup_mr(pd, access, sge->lkey, RXE_LOOKUP_LOCAL);
 		if (!mr) {
 			err = -EINVAL;
 			goto err1;
@@ -438,7 +423,7 @@ int copy_data(
 
 			if (sge->length) {
 				mr = lookup_mr(pd, access, sge->lkey,
-					       lookup_local);
+					       RXE_LOOKUP_LOCAL);
 				if (!mr) {
 					err = -EINVAL;
 					goto err1;
@@ -520,7 +505,7 @@ int advance_dma_data(struct rxe_dma_info *dma, unsigned int length)
  * (4) verify that mr state is valid
  */
 struct rxe_mr *lookup_mr(struct rxe_pd *pd, int access, u32 key,
-			 enum lookup_type type)
+			 enum rxe_mr_lookup_type type)
 {
 	struct rxe_mr *mr;
 	struct rxe_dev *rxe = to_rdev(pd->ibpd.device);
@@ -530,8 +515,8 @@ struct rxe_mr *lookup_mr(struct rxe_pd *pd, int access, u32 key,
 	if (!mr)
 		return NULL;
 
-	if (unlikely((type == lookup_local && mr_lkey(mr) != key) ||
-		     (type == lookup_remote && mr_rkey(mr) != key) ||
+	if (unlikely((type == RXE_LOOKUP_LOCAL && mr_lkey(mr) != key) ||
+		     (type == RXE_LOOKUP_REMOTE && mr_rkey(mr) != key) ||
 		     mr_pd(mr) != pd || (access && !(access & mr->access)) ||
 		     mr->state != RXE_MR_STATE_VALID)) {
 		rxe_drop_ref(mr);
@@ -540,3 +525,47 @@ struct rxe_mr *lookup_mr(struct rxe_pd *pd, int access, u32 key,
 
 	return mr;
 }
+
+int rxe_invalidate_mr(struct rxe_qp *qp, u32 rkey)
+{
+	struct rxe_dev *rxe = to_rdev(qp->ibqp.device);
+	struct rxe_mr *mr;
+	int ret;
+
+	mr = rxe_pool_get_index(&rxe->mr_pool, rkey >> 8);
+	if (!mr) {
+		pr_err("%s: No MR for rkey %#x\n", __func__, rkey);
+		ret = -EINVAL;
+		goto err;
+	}
+
+	if (rkey != mr->ibmr.rkey) {
+		pr_err("%s: rkey (%#x) doesn't match mr->ibmr.rkey (%#x)\n",
+			__func__, rkey, mr->ibmr.rkey);
+		ret = -EINVAL;
+		goto err_drop_ref;
+	}
+
+	mr->state = RXE_MR_STATE_FREE;
+	ret = 0;
+
+err_drop_ref:
+	rxe_drop_ref(mr);
+err:
+	return ret;
+}
+
+void rxe_mr_cleanup(struct rxe_pool_entry *arg)
+{
+	struct rxe_mr *mr = container_of(arg, typeof(*mr), pelem);
+	int i;
+
+	ib_umem_release(mr->umem);
+
+	if (mr->map) {
+		for (i = 0; i < mr->num_map; i++)
+			kfree(mr->map[i]);
+
+		kfree(mr->map);
+	}
+}
diff --git a/drivers/infiniband/sw/rxe/rxe_mw.c b/drivers/infiniband/sw/rxe/rxe_mw.c
index 65215dde9974..594f8cef0a08 100644
--- a/drivers/infiniband/sw/rxe/rxe_mw.c
+++ b/drivers/infiniband/sw/rxe/rxe_mw.c
@@ -245,6 +245,73 @@ int rxe_bind_mw(struct rxe_qp *qp, struct rxe_send_wqe *wqe)
 	return ret;
 }
 
+static int rxe_check_invalidate_mw(struct rxe_qp *qp, struct rxe_mw *mw)
+{
+	if (unlikely(mw->state == RXE_MW_STATE_INVALID))
+		return -EINVAL;
+
+	/* o10-37.2.26 */
+	if (unlikely(mw->ibmw.type == IB_MW_TYPE_1))
+		return -EINVAL;
+
+	return 0;
+}
+
+static void rxe_do_invalidate_mw(struct rxe_mw *mw)
+{
+	struct rxe_qp *qp;
+	struct rxe_mr *mr;
+
+	/* valid type 2 MW will always have a QP pointer */
+	qp = mw->qp;
+	mw->qp = NULL;
+	rxe_drop_ref(qp);
+
+	/* valid type 2 MW will always have an MR pointer */
+	mr = mw->mr;
+	mw->mr = NULL;
+	atomic_dec(&mr->num_mw);
+	rxe_drop_ref(mr);
+
+	mw->access = 0;
+	mw->addr = 0;
+	mw->length = 0;
+	mw->state = RXE_MW_STATE_FREE;
+}
+
+int rxe_invalidate_mw(struct rxe_qp *qp, u32 rkey)
+{
+	struct rxe_dev *rxe = to_rdev(qp->ibqp.device);
+	unsigned long flags;
+	struct rxe_mw *mw;
+	int ret;
+
+	mw = rxe_pool_get_index(&rxe->mw_pool, rkey >> 8);
+	if (!mw) {
+		ret = -EINVAL;
+		goto err;
+	}
+
+	if (rkey != mw->ibmw.rkey) {
+		ret = -EINVAL;
+		goto err_drop_ref;
+	}
+
+	spin_lock_irqsave(&mw->lock, flags);
+
+	ret = rxe_check_invalidate_mw(qp, mw);
+	if (ret)
+		goto err_unlock;
+
+	rxe_do_invalidate_mw(mw);
+err_unlock:
+	spin_unlock_irqrestore(&mw->lock, flags);
+err_drop_ref:
+	rxe_drop_ref(mw);
+err:
+	return ret;
+}
+
 void rxe_mw_cleanup(struct rxe_pool_entry *elem)
 {
 	struct rxe_mw *mw = container_of(elem, typeof(*mw), pelem);
diff --git a/drivers/infiniband/sw/rxe/rxe_req.c b/drivers/infiniband/sw/rxe/rxe_req.c
index 29fa41b4d2ac..52ddf2d1fe61 100644
--- a/drivers/infiniband/sw/rxe/rxe_req.c
+++ b/drivers/infiniband/sw/rxe/rxe_req.c
@@ -464,7 +464,7 @@ static int fill_packet(struct rxe_qp *qp, struct rxe_send_wqe *wqe,
 		} else {
 			err = copy_data(qp->pd, 0, &wqe->dma,
 					payload_addr(pkt), paylen,
-					from_mr_obj,
+					RXE_FROM_MR_OBJ,
 					&crc);
 			if (err)
 				return err;
@@ -558,25 +558,25 @@ static void update_state(struct rxe_qp *qp, struct rxe_send_wqe *wqe,
 static int rxe_do_local_ops(struct rxe_qp *qp, struct rxe_send_wqe *wqe)
 {
 	u8 opcode = wqe->wr.opcode;
-	struct rxe_dev *rxe;
 	struct rxe_mr *mr;
-	u32 rkey;
 	int ret;
+	u32 rkey;
 
 	switch (opcode) {
 	case IB_WR_LOCAL_INV:
-		rxe = to_rdev(qp->ibqp.device);
 		rkey = wqe->wr.ex.invalidate_rkey;
-		mr = rxe_pool_get_index(&rxe->mr_pool, rkey >> 8);
-		if (!mr) {
-			pr_err("No MR for rkey %#x\n", rkey);
+		if (rkey_is_mw(rkey))
+			ret = rxe_invalidate_mw(qp, rkey);
+		else
+			ret = rxe_invalidate_mr(qp, rkey);
+
+		if (ret) {
 			wqe->state = wqe_state_error;
 			wqe->status = IB_WC_LOC_QP_OP_ERR;
-			return -EINVAL;
+			return ret;
 		}
-		mr->state = RXE_MR_STATE_FREE;
-		rxe_drop_ref(mr);
 		break;
+
 	case IB_WR_REG_MR:
 		mr = to_rmr(wqe->wr.wr.reg.mr);
@@ -588,14 +588,16 @@ static int rxe_do_local_ops(struct rxe_qp *qp, struct rxe_send_wqe *wqe)
 		mr->iova = wqe->wr.wr.reg.mr->iova;
 		rxe_drop_ref(mr);
 		break;
+
 	case IB_WR_BIND_MW:
 		ret = rxe_bind_mw(qp, wqe);
 		if (ret) {
 			wqe->state = wqe_state_error;
 			wqe->status = IB_WC_MW_BIND_ERR;
-			return -EINVAL;
+			return ret;
 		}
 		break;
+
 	default:
 		pr_err("Unexpected send wqe opcode %d\n", opcode);
 		wqe->state = wqe_state_error;
diff --git a/drivers/infiniband/sw/rxe/rxe_resp.c b/drivers/infiniband/sw/rxe/rxe_resp.c
index 2b220659bddb..759e9789cd4d 100644
--- a/drivers/infiniband/sw/rxe/rxe_resp.c
+++ b/drivers/infiniband/sw/rxe/rxe_resp.c
@@ -35,6 +35,7 @@ enum resp_states {
 	RESPST_ERR_TOO_MANY_RDMA_ATM_REQ,
 	RESPST_ERR_RNR,
 	RESPST_ERR_RKEY_VIOLATION,
+	RESPST_ERR_INVALIDATE_RKEY,
 	RESPST_ERR_LENGTH,
 	RESPST_ERR_CQ_OVERFLOW,
 	RESPST_ERROR,
@@ -68,6 +69,7 @@ static char *resp_state_name[] = {
 	[RESPST_ERR_TOO_MANY_RDMA_ATM_REQ]	= "ERR_TOO_MANY_RDMA_ATM_REQ",
 	[RESPST_ERR_RNR]			= "ERR_RNR",
 	[RESPST_ERR_RKEY_VIOLATION]		= "ERR_RKEY_VIOLATION",
+	[RESPST_ERR_INVALIDATE_RKEY]		= "ERR_INVALIDATE_RKEY_VIOLATION",
 	[RESPST_ERR_LENGTH]			= "ERR_LENGTH",
 	[RESPST_ERR_CQ_OVERFLOW]		= "ERR_CQ_OVERFLOW",
 	[RESPST_ERROR]				= "ERROR",
@@ -430,7 +432,7 @@ static enum resp_states check_rkey(struct rxe_qp *qp,
 	resid = qp->resp.resid;
 	pktlen = payload_size(pkt);
 
-	mr = lookup_mr(qp->pd, access, rkey, lookup_remote);
+	mr = lookup_mr(qp->pd, access, rkey, RXE_LOOKUP_REMOTE);
 	if (!mr) {
 		state = RESPST_ERR_RKEY_VIOLATION;
 		goto err;
@@ -484,7 +486,7 @@ static enum resp_states send_data_in(struct rxe_qp *qp, void *data_addr,
 	int err;
 
 	err = copy_data(qp->pd, IB_ACCESS_LOCAL_WRITE, &qp->resp.wqe->dma,
-			data_addr, data_len, to_mr_obj, NULL);
+			data_addr, data_len, RXE_TO_MR_OBJ, NULL);
 	if (unlikely(err))
 		return (err == -ENOSPC) ? RESPST_ERR_LENGTH
					: RESPST_ERR_MALFORMED_WQE;
 
@@ -500,7 +502,7 @@ static enum resp_states write_data_in(struct rxe_qp *qp,
 	int data_len = payload_size(pkt);
 
 	err = rxe_mr_copy(qp->resp.mr, qp->resp.va, payload_addr(pkt), data_len,
-			  to_mr_obj, NULL);
+			  RXE_TO_MR_OBJ, NULL);
 	if (err) {
 		rc = RESPST_ERR_RKEY_VIOLATION;
 		goto out;
@@ -701,7 +703,7 @@ static enum resp_states read_reply(struct rxe_qp *qp,
 		return RESPST_ERR_RNR;
 
 	err = rxe_mr_copy(res->read.mr, res->read.va, payload_addr(&ack_pkt),
-			  payload, from_mr_obj, &icrc);
+			  payload, RXE_FROM_MR_OBJ, &icrc);
 	if (err)
 		pr_err("Failed copying memory\n");
 
@@ -751,6 +753,14 @@ static void build_rdma_network_hdr(union rdma_network_hdr *hdr,
 		memcpy(&hdr->ibgrh, ipv6_hdr(skb), sizeof(hdr->ibgrh));
 }
 
+static int invalidate_rkey(struct rxe_qp *qp, u32 rkey)
+{
+	if (rkey_is_mw(rkey))
+		return rxe_invalidate_mw(qp, rkey);
+	else
+		return rxe_invalidate_mr(qp, rkey);
+}
+
 /* Executes a new request. A retried request never reaches this function
  * (sends and writes are discarded, and reads and atomics are retried
  * elsewhere).
  */
@@ -790,6 +800,14 @@ static enum resp_states execute(struct rxe_qp *qp, struct rxe_pkt_info *pkt)
 		WARN_ON_ONCE(1);
 	}
 
+	if (pkt->mask & RXE_IETH_MASK) {
+		u32 rkey = ieth_rkey(pkt);
+
+		err = invalidate_rkey(qp, rkey);
+		if (err)
+			return RESPST_ERR_INVALIDATE_RKEY;
+	}
+
 	/* next expected psn, read handles this separately */
 	qp->resp.psn = (pkt->psn + 1) & BTH_PSN_MASK;
 	qp->resp.ack_psn = qp->resp.psn;
@@ -822,13 +840,13 @@ static enum resp_states do_complete(struct rxe_qp *qp,
 	memset(&cqe, 0, sizeof(cqe));
 
 	if (qp->rcq->is_user) {
-		uwc->status = qp->resp.status;
-		uwc->qp_num = qp->ibqp.qp_num;
-		uwc->wr_id = wqe->wr_id;
+		uwc->status		= qp->resp.status;
+		uwc->qp_num		= qp->ibqp.qp_num;
+		uwc->wr_id		= wqe->wr_id;
 	} else {
-		wc->status = qp->resp.status;
-		wc->qp = &qp->ibqp;
-		wc->wr_id = wqe->wr_id;
+		wc->status		= qp->resp.status;
+		wc->qp			= &qp->ibqp;
+		wc->wr_id		= wqe->wr_id;
 	}
 
 	if (wc->status == IB_WC_SUCCESS) {
@@ -883,27 +901,14 @@ static enum resp_states do_complete(struct rxe_qp *qp,
 		}
 
 		if (pkt->mask & RXE_IETH_MASK) {
-			struct rxe_mr *rmr;
-
 			wc->wc_flags |= IB_WC_WITH_INVALIDATE;
 			wc->ex.invalidate_rkey = ieth_rkey(pkt);
-
-			rmr = rxe_pool_get_index(&rxe->mr_pool,
-						 wc->ex.invalidate_rkey >> 8);
-			if (unlikely(!rmr)) {
-				pr_err("Bad rkey %#x invalidation\n",
-				       wc->ex.invalidate_rkey);
-				return RESPST_ERROR;
-			}
-			rmr->state = RXE_MR_STATE_FREE;
-			rxe_drop_ref(rmr);
 		}
 
-		wc->qp = &qp->ibqp;
-
 		if (pkt->mask & RXE_DETH_MASK)
 			wc->src_qp = deth_sqp(pkt);
 
+		wc->qp = &qp->ibqp;
 		wc->port_num = qp->attr.port_num;
 	}
 }
@@ -1314,6 +1319,13 @@ int rxe_responder(void *arg)
 			}
 			break;
 
+		case RESPST_ERR_INVALIDATE_RKEY:
+			/* RC - Class J. */
+			qp->resp.goto_error = 1;
+			qp->resp.status = IB_WC_REM_INV_REQ_ERR;
+			state = RESPST_COMPLETE;
+			break;
+
 		case RESPST_ERR_LENGTH:
 			if (qp_type(qp) == IB_QPT_RC) {
 				/* Class C */
diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.h b/drivers/infiniband/sw/rxe/rxe_verbs.h
index 7da47b8c707b..74fcd871757d 100644
--- a/drivers/infiniband/sw/rxe/rxe_verbs.h
+++ b/drivers/infiniband/sw/rxe/rxe_verbs.h
@@ -275,6 +275,16 @@ enum rxe_mr_type {
 	RXE_MR_TYPE_MR,
 };
 
+enum rxe_mr_copy_dir {
+	RXE_TO_MR_OBJ,
+	RXE_FROM_MR_OBJ,
+};
+
+enum rxe_mr_lookup_type {
+	RXE_LOOKUP_LOCAL,
+	RXE_LOOKUP_REMOTE,
+};
+
 #define RXE_BUF_PER_MAP		(PAGE_SIZE / sizeof(struct rxe_phys_buf))
 
 struct rxe_phys_buf {
@@ -286,6 +296,13 @@ struct rxe_map {
 	struct rxe_phys_buf	buf[RXE_BUF_PER_MAP];
 };
 
+static inline int rkey_is_mw(u32 rkey)
+{
+	u32 index = rkey >> 8;
+
+	return (index >= RXE_MIN_MW_INDEX) && (index <= RXE_MAX_MW_INDEX);
+}
+
 struct rxe_mr {
 	struct rxe_pool_entry	pelem;
 	struct ib_mr		ibmr;
@@ -311,23 +328,23 @@ struct rxe_mr {
 	u32			max_buf;
 	u32			num_map;
 
-	struct rxe_map		**map;
-
 	atomic_t		num_mw;
+
+	struct rxe_map		**map;
 };
 
 enum rxe_mw_state {
-	RXE_MW_STATE_INVALID = RXE_MR_STATE_INVALID,
-	RXE_MW_STATE_FREE = RXE_MR_STATE_FREE,
-	RXE_MW_STATE_VALID = RXE_MR_STATE_VALID,
+	RXE_MW_STATE_INVALID	= RXE_MR_STATE_INVALID,
+	RXE_MW_STATE_FREE	= RXE_MR_STATE_FREE,
+	RXE_MW_STATE_VALID	= RXE_MR_STATE_VALID,
 };
 
 struct rxe_mw {
-	struct ib_mw ibmw;
-	struct rxe_pool_entry pelem;
+	struct ib_mw		ibmw;
+	struct rxe_pool_entry	pelem;
 	spinlock_t		lock;
 	enum rxe_mw_state	state;
-	struct rxe_qp *qp; /* Type 2 only */
+	struct rxe_qp		*qp; /* Type 2 only */
 	struct rxe_mr		*mr;
 	int			access;
 	u64			addr;
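For reference, the two invalidate paths implemented above map onto the verbs
API as follows: a local invalidate is posted as an IB_WR_LOCAL_INV work
request and handled in rxe_do_local_ops(), while a remote invalidate arrives
as a send with invalidate and is executed by invalidate_rkey() in rxe_resp.c,
which uses rkey_is_mw() to route the rkey to either the MR or the MW pool.
A rough libibverbs sketch of both (an illustration under assumed names, not
part of this series; RC QP assumed, error handling and completion polling
elided):

/* Hedged sketch, not part of this series: the two verbs that reach the
 * invalidate paths above. Assumes qp and an rkey from a previous type 2
 * bind; error handling and completion polling are elided.
 */
#include <stdint.h>
#include <infiniband/verbs.h>

/* local invalidate: handled on this node by rxe_do_local_ops() */
static void invalidate_local(struct ibv_qp *qp, uint32_t rkey)
{
	struct ibv_send_wr wr = {}, *bad_wr;

	wr.opcode = IBV_WR_LOCAL_INV;
	wr.invalidate_rkey = rkey;
	wr.send_flags = IBV_SEND_SIGNALED;
	ibv_post_send(qp, &wr, &bad_wr);
}

/* remote invalidate: the peer's responder runs invalidate_rkey(),
 * which uses rkey_is_mw() to pick the MR or MW pool */
static void invalidate_remote(struct ibv_qp *qp, uint32_t rkey)
{
	struct ibv_send_wr wr = {}, *bad_wr;

	wr.opcode = IBV_WR_SEND_WITH_INV;
	wr.invalidate_rkey = rkey;
	wr.send_flags = IBV_SEND_SIGNALED;
	ibv_post_send(qp, &wr, &bad_wr);
}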
From patchwork Fri May 21 20:18:24 2021
From: Bob Pearson
To: jgg@nvidia.com, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org
Cc: Bob Pearson
Subject: [PATCH for-next v7 09/10] RDMA/rxe: Implement memory access through MWs
Date: Fri, 21 May 2021 15:18:24 -0500
Message-Id: <20210521201824.659565-10-rpearsonhpe@gmail.com>
In-Reply-To: <20210521201824.659565-1-rpearsonhpe@gmail.com>
References: <20210521201824.659565-1-rpearsonhpe@gmail.com>

Add code to implement memory access through memory windows.
Signed-off-by: Bob Pearson
---
 drivers/infiniband/sw/rxe/rxe_loc.h   |  1 +
 drivers/infiniband/sw/rxe/rxe_mw.c    | 23 +++++++++++
 drivers/infiniband/sw/rxe/rxe_resp.c  | 55 +++++++++++++++++++--------
 drivers/infiniband/sw/rxe/rxe_verbs.h | 11 ++++++
 4 files changed, 75 insertions(+), 15 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe_loc.h b/drivers/infiniband/sw/rxe/rxe_loc.h
index 8a4633aeb2a8..6e4b5e22541e 100644
--- a/drivers/infiniband/sw/rxe/rxe_loc.h
+++ b/drivers/infiniband/sw/rxe/rxe_loc.h
@@ -94,6 +94,7 @@ int rxe_alloc_mw(struct ib_mw *ibmw, struct ib_udata *udata);
 int rxe_dealloc_mw(struct ib_mw *ibmw);
 int rxe_bind_mw(struct rxe_qp *qp, struct rxe_send_wqe *wqe);
 int rxe_invalidate_mw(struct rxe_qp *qp, u32 rkey);
+struct rxe_mw *rxe_lookup_mw(struct rxe_qp *qp, int access, u32 rkey);
 void rxe_mw_cleanup(struct rxe_pool_entry *arg);
 
 /* rxe_net.c */
diff --git a/drivers/infiniband/sw/rxe/rxe_mw.c b/drivers/infiniband/sw/rxe/rxe_mw.c
index 594f8cef0a08..5ba77df7598e 100644
--- a/drivers/infiniband/sw/rxe/rxe_mw.c
+++ b/drivers/infiniband/sw/rxe/rxe_mw.c
@@ -312,6 +312,29 @@ int rxe_invalidate_mw(struct rxe_qp *qp, u32 rkey)
 	return ret;
 }
 
+struct rxe_mw *rxe_lookup_mw(struct rxe_qp *qp, int access, u32 rkey)
+{
+	struct rxe_dev *rxe = to_rdev(qp->ibqp.device);
+	struct rxe_pd *pd = to_rpd(qp->ibqp.pd);
+	struct rxe_mw *mw;
+	int index = rkey >> 8;
+
+	mw = rxe_pool_get_index(&rxe->mw_pool, index);
+	if (!mw)
+		return NULL;
+
+	if (unlikely((rxe_mw_rkey(mw) != rkey) || rxe_mw_pd(mw) != pd ||
+		     (mw->ibmw.type == IB_MW_TYPE_2 && mw->qp != qp) ||
+		     (mw->length == 0) ||
+		     (access && !(access & mw->access)) ||
+		     mw->state != RXE_MW_STATE_VALID)) {
+		rxe_drop_ref(mw);
+		return NULL;
+	}
+
+	return mw;
+}
+
 void rxe_mw_cleanup(struct rxe_pool_entry *elem)
 {
 	struct rxe_mw *mw = container_of(elem, typeof(*mw), pelem);
diff --git a/drivers/infiniband/sw/rxe/rxe_resp.c b/drivers/infiniband/sw/rxe/rxe_resp.c
index 759e9789cd4d..1ea576d42882 100644
--- a/drivers/infiniband/sw/rxe/rxe_resp.c
+++ b/drivers/infiniband/sw/rxe/rxe_resp.c
@@ -394,6 +394,7 @@ static enum resp_states check_rkey(struct rxe_qp *qp,
 			      struct rxe_pkt_info *pkt)
 {
 	struct rxe_mr *mr = NULL;
+	struct rxe_mw *mw = NULL;
 	u64 va;
 	u32 rkey;
 	u32 resid;
@@ -405,6 +406,7 @@ static enum resp_states check_rkey(struct rxe_qp *qp,
 	if (pkt->mask & (RXE_READ_MASK | RXE_WRITE_MASK)) {
 		if (pkt->mask & RXE_RETH_MASK) {
 			qp->resp.va = reth_va(pkt);
+			qp->resp.offset = 0;
 			qp->resp.rkey = reth_rkey(pkt);
 			qp->resp.resid = reth_len(pkt);
 			qp->resp.length = reth_len(pkt);
@@ -413,6 +415,7 @@ static enum resp_states check_rkey(struct rxe_qp *qp,
 			: IB_ACCESS_REMOTE_WRITE;
 	} else if (pkt->mask & RXE_ATOMIC_MASK) {
 		qp->resp.va = atmeth_va(pkt);
+		qp->resp.offset = 0;
 		qp->resp.rkey = atmeth_rkey(pkt);
 		qp->resp.resid = sizeof(u64);
 		access = IB_ACCESS_REMOTE_ATOMIC;
@@ -432,18 +435,36 @@ static enum resp_states check_rkey(struct rxe_qp *qp,
 	resid = qp->resp.resid;
 	pktlen = payload_size(pkt);
 
-	mr = lookup_mr(qp->pd, access, rkey, RXE_LOOKUP_REMOTE);
-	if (!mr) {
-		state = RESPST_ERR_RKEY_VIOLATION;
-		goto err;
-	}
+	if (rkey_is_mw(rkey)) {
+		mw = rxe_lookup_mw(qp, access, rkey);
+		if (!mw) {
+			pr_err("%s: no MW matches rkey %#x\n", __func__, rkey);
+			state = RESPST_ERR_RKEY_VIOLATION;
+			goto err;
+		}
 
-	if (unlikely(mr->state == RXE_MR_STATE_FREE)) {
-		state = RESPST_ERR_RKEY_VIOLATION;
-		goto err;
+		mr = mw->mr;
+		if (!mr) {
+			pr_err("%s: MW doesn't have an MR\n", __func__);
+			state = RESPST_ERR_RKEY_VIOLATION;
+			goto err;
+		}
+
+		if (mw->access & IB_ZERO_BASED)
+			qp->resp.offset = mw->addr;
+
+		rxe_drop_ref(mw);
+		rxe_add_ref(mr);
+	} else {
+		mr = lookup_mr(qp->pd, access, rkey, RXE_LOOKUP_REMOTE);
+		if (!mr) {
+			pr_err("%s: no MR matches rkey %#x\n", __func__, rkey);
+			state = RESPST_ERR_RKEY_VIOLATION;
+			goto err;
+		}
 	}
 
-	if (mr_check_range(mr, va, resid)) {
+	if (mr_check_range(mr, va + qp->resp.offset, resid)) {
 		state = RESPST_ERR_RKEY_VIOLATION;
 		goto err;
 	}
@@ -477,6 +498,9 @@ static enum resp_states check_rkey(struct rxe_qp *qp,
 err:
 	if (mr)
 		rxe_drop_ref(mr);
+	if (mw)
+		rxe_drop_ref(mw);
+
 	return state;
 }
 
@@ -501,8 +525,8 @@ static enum resp_states write_data_in(struct rxe_qp *qp,
 	int	err;
 	int data_len = payload_size(pkt);
 
-	err = rxe_mr_copy(qp->resp.mr, qp->resp.va, payload_addr(pkt), data_len,
-			  RXE_TO_MR_OBJ, NULL);
+	err = rxe_mr_copy(qp->resp.mr, qp->resp.va + qp->resp.offset,
+			  payload_addr(pkt), data_len, RXE_TO_MR_OBJ, NULL);
 	if (err) {
 		rc = RESPST_ERR_RKEY_VIOLATION;
 		goto out;
@@ -521,7 +545,6 @@ static DEFINE_SPINLOCK(atomic_ops_lock);
 static enum resp_states process_atomic(struct rxe_qp *qp,
 				       struct rxe_pkt_info *pkt)
 {
-	u64 iova = atmeth_va(pkt);
 	u64 *vaddr;
 	enum resp_states ret;
 	struct rxe_mr *mr = qp->resp.mr;
@@ -531,7 +554,7 @@ static enum resp_states process_atomic(struct rxe_qp *qp,
 		goto out;
 	}
 
-	vaddr = iova_to_vaddr(mr, iova, sizeof(u64));
+	vaddr = iova_to_vaddr(mr, qp->resp.va + qp->resp.offset, sizeof(u64));
 
 	/* check vaddr is 8 bytes aligned. */
 	if (!vaddr || (uintptr_t)vaddr & 7) {
@@ -655,8 +678,10 @@ static enum resp_states read_reply(struct rxe_qp *qp,
 		res->type		= RXE_READ_MASK;
 		res->replay		= 0;
 
-		res->read.va		= qp->resp.va;
-		res->read.va_org	= qp->resp.va;
+		res->read.va		= qp->resp.va +
+					  qp->resp.offset;
+		res->read.va_org	= qp->resp.va +
+					  qp->resp.offset;
 
 		res->first_psn		= req_pkt->psn;
diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.h b/drivers/infiniband/sw/rxe/rxe_verbs.h
index 74fcd871757d..cf8cae64f7df 100644
--- a/drivers/infiniband/sw/rxe/rxe_verbs.h
+++ b/drivers/infiniband/sw/rxe/rxe_verbs.h
@@ -183,6 +183,7 @@ struct rxe_resp_info {
 
 	/* RDMA read / atomic only */
 	u64			va;
+	u64			offset;
 	struct rxe_mr		*mr;
 	u32			resid;
 	u32			rkey;
@@ -480,6 +481,16 @@ static inline u32 mr_rkey(struct rxe_mr *mr)
 	return mr->ibmr.rkey;
}
 
+static inline struct rxe_pd *rxe_mw_pd(struct rxe_mw *mw)
+{
+	return to_rpd(mw->ibmw.pd);
+}
+
+static inline u32 rxe_mw_rkey(struct rxe_mw *mw)
+{
+	return mw->ibmw.rkey;
+}
+
 int rxe_register_device(struct rxe_dev *rxe, const char *ibdev_name);
 
 void rxe_mc_cleanup(struct rxe_pool_entry *arg);
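For reference, once a type 2 MW is bound the peer simply uses the advertised
MW rkey in place of an MR rkey; on the responder it is resolved by
rxe_lookup_mw() in check_rkey() above, with qp->resp.offset making zero based
windows work. A rough libibverbs sketch of the remote side (assumed names, an
illustration rather than part of this series; error handling elided):

/* Hedged sketch, not part of this series: a remote peer using the
 * advertised MW rkey exactly like an MR rkey. remote_addr should be 0
 * for a zero based window (the responder adds mw->addr via
 * qp->resp.offset) or the bound VA otherwise. Error handling elided.
 */
#include <stdint.h>
#include <infiniband/verbs.h>

static void write_through_mw(struct ibv_qp *qp, struct ibv_mr *local_mr,
			     void *src, uint32_t len,
			     uint64_t remote_addr, uint32_t mw_rkey)
{
	struct ibv_sge sge = {
		.addr	= (uintptr_t)src,
		.length	= len,
		.lkey	= local_mr->lkey,
	};
	struct ibv_send_wr wr = {}, *bad_wr;

	wr.opcode = IBV_WR_RDMA_WRITE;
	wr.sg_list = &sge;
	wr.num_sge = 1;
	wr.send_flags = IBV_SEND_SIGNALED;
	wr.wr.rdma.remote_addr = remote_addr;
	wr.wr.rdma.rkey = mw_rkey;	/* the MW rkey, not the MR rkey */
	ibv_post_send(qp, &wr, &bad_wr);
}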
From patchwork Fri May 21 20:18:25 2021
From: Bob Pearson
To: jgg@nvidia.com, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org
Cc: Bob Pearson
Subject: [PATCH for-next v7 10/10] RDMA/rxe: Disallow MR dereg and invalidate when bound
Date: Fri, 21 May 2021 15:18:25 -0500
Message-Id: <20210521201824.659565-11-rpearsonhpe@gmail.com>
In-Reply-To: <20210521201824.659565-1-rpearsonhpe@gmail.com>
References: <20210521201824.659565-1-rpearsonhpe@gmail.com>

Check that an MR has no bound MWs before allowing a dereg or invalidate
operation.
Signed-off-by: Bob Pearson
---
 drivers/infiniband/sw/rxe/rxe_loc.h   |  1 +
 drivers/infiniband/sw/rxe/rxe_mr.c    | 25 +++++++++++++++++++++++++
 drivers/infiniband/sw/rxe/rxe_verbs.c | 11 -----------
 3 files changed, 26 insertions(+), 11 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe_loc.h b/drivers/infiniband/sw/rxe/rxe_loc.h
index 6e4b5e22541e..1ddb20855dee 100644
--- a/drivers/infiniband/sw/rxe/rxe_loc.h
+++ b/drivers/infiniband/sw/rxe/rxe_loc.h
@@ -87,6 +87,7 @@ struct rxe_mr *lookup_mr(struct rxe_pd *pd, int access, u32 key,
 int mr_check_range(struct rxe_mr *mr, u64 iova, size_t length);
 int advance_dma_data(struct rxe_dma_info *dma, unsigned int length);
 int rxe_invalidate_mr(struct rxe_qp *qp, u32 rkey);
+int rxe_dereg_mr(struct ib_mr *ibmr, struct ib_udata *udata);
 void rxe_mr_cleanup(struct rxe_pool_entry *arg);
 
 /* rxe_mw.c */
diff --git a/drivers/infiniband/sw/rxe/rxe_mr.c b/drivers/infiniband/sw/rxe/rxe_mr.c
index 3fb58d2c7814..7f169329a8bf 100644
--- a/drivers/infiniband/sw/rxe/rxe_mr.c
+++ b/drivers/infiniband/sw/rxe/rxe_mr.c
@@ -546,6 +546,13 @@ int rxe_invalidate_mr(struct rxe_qp *qp, u32 rkey)
 		goto err_drop_ref;
 	}
 
+	if (atomic_read(&mr->num_mw) > 0) {
+		pr_warn("%s: Attempt to invalidate an MR while bound to MWs\n",
+			__func__);
+		ret = -EINVAL;
+		goto err_drop_ref;
+	}
+
 	mr->state = RXE_MR_STATE_FREE;
 	ret = 0;
 
@@ -555,6 +562,24 @@ int rxe_invalidate_mr(struct rxe_qp *qp, u32 rkey)
 	return ret;
 }
 
+int rxe_dereg_mr(struct ib_mr *ibmr, struct ib_udata *udata)
+{
+	struct rxe_mr *mr = to_rmr(ibmr);
+
+	if (atomic_read(&mr->num_mw) > 0) {
+		pr_warn("%s: Attempt to deregister an MR while bound to MWs\n",
+			__func__);
+		return -EINVAL;
+	}
+
+	mr->state = RXE_MR_STATE_ZOMBIE;
+	rxe_drop_ref(mr_pd(mr));
+	rxe_drop_index(mr);
+	rxe_drop_ref(mr);
+
+	return 0;
+}
+
 void rxe_mr_cleanup(struct rxe_pool_entry *arg)
 {
 	struct rxe_mr *mr = container_of(arg, typeof(*mr), pelem);
diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.c b/drivers/infiniband/sw/rxe/rxe_verbs.c
index 4860e8ab378e..3e0bab4994d1 100644
--- a/drivers/infiniband/sw/rxe/rxe_verbs.c
+++ b/drivers/infiniband/sw/rxe/rxe_verbs.c
@@ -913,17 +913,6 @@ static struct ib_mr *rxe_reg_user_mr(struct ib_pd *ibpd,
 	return ERR_PTR(err);
 }
 
-static int rxe_dereg_mr(struct ib_mr *ibmr, struct ib_udata *udata)
-{
-	struct rxe_mr *mr = to_rmr(ibmr);
-
-	mr->state = RXE_MR_STATE_ZOMBIE;
-	rxe_drop_ref(mr_pd(mr));
-	rxe_drop_index(mr);
-	rxe_drop_ref(mr);
-	return 0;
-}
-
 static struct ib_mr *rxe_alloc_mr(struct ib_pd *ibpd, enum ib_mr_type mr_type,
				  u32 max_num_sg)
{
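Taken together with patch 08, the consumer-visible effect of this last patch
is that an MR cannot be deregistered or invalidated out from under a bound
window. A rough teardown sketch (assumed names carried over from the earlier
sketches; an illustration, not part of this series; each posted work
request's completion should be reaped before the next step):

/* Hedged sketch, not part of this series: the consumer-visible effect
 * of the num_mw checks. invalidate_local() is from the sketch after
 * patch 08; wait for its completion before deallocating the MW.
 */
#include <stdio.h>
#include <infiniband/verbs.h>

static void teardown(struct ibv_qp *qp, struct ibv_mr *mr,
		     struct ibv_mw *mw, uint32_t mw_rkey)
{
	if (ibv_dereg_mr(mr))		/* refused while mr->num_mw > 0 */
		fprintf(stderr, "dereg refused: MW still bound\n");

	invalidate_local(qp, mw_rkey);	/* see the sketch after patch 08 */
	/* ... poll the CQ for the invalidate completion here ... */
	ibv_dealloc_mw(mw);

	if (ibv_dereg_mr(mr))		/* now succeeds */
		perror("ibv_dereg_mr");
}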