From patchwork Fri Mar 18 01:55:04 2022
X-Patchwork-Submitter: Bob Pearson
X-Patchwork-Id: 12784760
X-Patchwork-Delegate: jgg@ziepe.ca

From: Bob Pearson
To: jgg@nvidia.com, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org
Cc: Bob Pearson
Subject: [PATCH for-next v12 01/11] RDMA/rxe: Replace #define by enum
Date: Thu, 17 Mar 2022 20:55:04 -0500
Message-Id: <20220318015514.231621-2-rpearsonhpe@gmail.com>
In-Reply-To: <20220318015514.231621-1-rpearsonhpe@gmail.com>
References: <20220318015514.231621-1-rpearsonhpe@gmail.com>
X-Mailing-List: linux-rdma@vger.kernel.org

Currently the #define IB_SRQ_INIT_MASK is used to distinguish the
rxe_create_srq verb from the rxe_modify_srq verb so that some code can
be shared between these two subroutines. This commit replaces the
#define with an enum which extends enum ib_srq_attr_mask, and makes
related changes to the prototypes to clean up type warnings.
The parameter is given a rxe specific name.

Signed-off-by: Bob Pearson
---
 drivers/infiniband/sw/rxe/rxe_loc.h   |  8 ++------
 drivers/infiniband/sw/rxe/rxe_srq.c   | 10 +++++-----
 drivers/infiniband/sw/rxe/rxe_verbs.c |  7 ++++---
 drivers/infiniband/sw/rxe/rxe_verbs.h |  6 ++++++
 4 files changed, 17 insertions(+), 14 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe_loc.h b/drivers/infiniband/sw/rxe/rxe_loc.h
index 2ffbe3390668..9067d3b6f1ee 100644
--- a/drivers/infiniband/sw/rxe/rxe_loc.h
+++ b/drivers/infiniband/sw/rxe/rxe_loc.h
@@ -159,17 +159,13 @@ void retransmit_timer(struct timer_list *t);
 void rnr_nak_timer(struct timer_list *t);
 
 /* rxe_srq.c */
-#define IB_SRQ_INIT_MASK (~IB_SRQ_LIMIT)
-
 int rxe_srq_chk_attr(struct rxe_dev *rxe, struct rxe_srq *srq,
-		     struct ib_srq_attr *attr, enum ib_srq_attr_mask mask);
-
+		     struct ib_srq_attr *attr, enum rxe_srq_attr_mask mask);
 int rxe_srq_from_init(struct rxe_dev *rxe, struct rxe_srq *srq,
 		      struct ib_srq_init_attr *init, struct ib_udata *udata,
 		      struct rxe_create_srq_resp __user *uresp);
-
 int rxe_srq_from_attr(struct rxe_dev *rxe, struct rxe_srq *srq,
-		      struct ib_srq_attr *attr, enum ib_srq_attr_mask mask,
+		      struct ib_srq_attr *attr, enum rxe_srq_attr_mask mask,
		      struct rxe_modify_srq_cmd *ucmd, struct ib_udata *udata);
 
 void rxe_dealloc(struct ib_device *ib_dev);

diff --git a/drivers/infiniband/sw/rxe/rxe_srq.c b/drivers/infiniband/sw/rxe/rxe_srq.c
index 0c0721f04357..862aa749c93a 100644
--- a/drivers/infiniband/sw/rxe/rxe_srq.c
+++ b/drivers/infiniband/sw/rxe/rxe_srq.c
@@ -10,14 +10,14 @@
 #include "rxe_queue.h"
 
 int rxe_srq_chk_attr(struct rxe_dev *rxe, struct rxe_srq *srq,
-		     struct ib_srq_attr *attr, enum ib_srq_attr_mask mask)
+		     struct ib_srq_attr *attr, enum rxe_srq_attr_mask mask)
 {
 	if (srq && srq->error) {
 		pr_warn("srq in error state\n");
 		goto err1;
 	}
 
-	if (mask & IB_SRQ_MAX_WR) {
+	if (mask & RXE_SRQ_MAX_WR) {
 		if (attr->max_wr > rxe->attr.max_srq_wr) {
 			pr_warn("max_wr(%d) > max_srq_wr(%d)\n",
 				attr->max_wr, rxe->attr.max_srq_wr);
@@ -39,7 +39,7 @@ int rxe_srq_chk_attr(struct rxe_dev *rxe, struct rxe_srq *srq,
 		attr->max_wr = RXE_MIN_SRQ_WR;
 	}
 
-	if (mask & IB_SRQ_LIMIT) {
+	if (mask & RXE_SRQ_LIMIT) {
 		if (attr->srq_limit > rxe->attr.max_srq_wr) {
 			pr_warn("srq_limit(%d) > max_srq_wr(%d)\n",
 				attr->srq_limit, rxe->attr.max_srq_wr);
@@ -54,7 +54,7 @@ int rxe_srq_chk_attr(struct rxe_dev *rxe, struct rxe_srq *srq,
 		}
 	}
 
-	if (mask == IB_SRQ_INIT_MASK) {
+	if (mask == RXE_SRQ_INIT) {
 		if (attr->max_sge > rxe->attr.max_srq_sge) {
 			pr_warn("max_sge(%d) > max_srq_sge(%d)\n",
 				attr->max_sge, rxe->attr.max_srq_sge);
@@ -122,7 +122,7 @@ int rxe_srq_from_init(struct rxe_dev *rxe, struct rxe_srq *srq,
 }
 
 int rxe_srq_from_attr(struct rxe_dev *rxe, struct rxe_srq *srq,
-		      struct ib_srq_attr *attr, enum ib_srq_attr_mask mask,
+		      struct ib_srq_attr *attr, enum rxe_srq_attr_mask mask,
 		      struct rxe_modify_srq_cmd *ucmd, struct ib_udata *udata)
 {
 	int err;

diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.c b/drivers/infiniband/sw/rxe/rxe_verbs.c
index 67184b0281a0..5609956d2bc3 100644
--- a/drivers/infiniband/sw/rxe/rxe_verbs.c
+++ b/drivers/infiniband/sw/rxe/rxe_verbs.c
@@ -7,8 +7,8 @@
 #include
 #include
 #include
+
 #include "rxe.h"
-#include "rxe_loc.h"
 #include "rxe_queue.h"
 #include "rxe_hw_counters.h"

@@ -295,7 +295,7 @@ static int rxe_create_srq(struct ib_srq *ibsrq, struct ib_srq_init_attr *init,
 		uresp = udata->outbuf;
 	}
 
-	err = rxe_srq_chk_attr(rxe, NULL, &init->attr, IB_SRQ_INIT_MASK);
+	err = rxe_srq_chk_attr(rxe, NULL, &init->attr, RXE_SRQ_INIT);
 	if (err)
 		goto err1;
 
@@ -320,13 +320,14 @@ static int rxe_create_srq(struct ib_srq *ibsrq, struct ib_srq_init_attr *init,
 }
 
 static int rxe_modify_srq(struct ib_srq *ibsrq, struct ib_srq_attr *attr,
-			  enum ib_srq_attr_mask mask,
+			  enum ib_srq_attr_mask ibmask,
 			  struct ib_udata *udata)
 {
 	int err;
 	struct rxe_srq *srq = to_rsrq(ibsrq);
 	struct rxe_dev *rxe = to_rdev(ibsrq->device);
 	struct rxe_modify_srq_cmd ucmd = {};
+	enum rxe_srq_attr_mask mask = (enum rxe_srq_attr_mask)ibmask;
 
 	if (udata) {
 		if (udata->inlen < sizeof(ucmd))

diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.h b/drivers/infiniband/sw/rxe/rxe_verbs.h
index e7eff1ca75e9..34aa013c7801 100644
--- a/drivers/infiniband/sw/rxe/rxe_verbs.h
+++ b/drivers/infiniband/sw/rxe/rxe_verbs.h
@@ -93,6 +93,12 @@ struct rxe_rq {
 	struct rxe_queue *queue;
 };
 
+enum rxe_srq_attr_mask {
+	RXE_SRQ_MAX_WR = IB_SRQ_MAX_WR,
+	RXE_SRQ_LIMIT = IB_SRQ_LIMIT,
+	RXE_SRQ_INIT = BIT(2),
+};
+
 struct rxe_srq {
 	struct ib_srq ibsrq;
 	struct rxe_pool_elem elem;

From patchwork Fri Mar 18 01:55:05 2022
From: Bob Pearson
To: jgg@nvidia.com, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org
Cc: Bob Pearson
Subject: [PATCH for-next v12 02/11] RDMA/rxe: Add rxe_srq_cleanup()
Date: Thu, 17 Mar 2022 20:55:05 -0500
Message-Id: <20220318015514.231621-3-rpearsonhpe@gmail.com>
In-Reply-To: <20220318015514.231621-1-rpearsonhpe@gmail.com>
References: <20220318015514.231621-1-rpearsonhpe@gmail.com>
X-Mailing-List: linux-rdma@vger.kernel.org

Move the cleanup code from rxe_destroy_srq() to rxe_srq_cleanup(), which
is called after all references are dropped, to allow code depending on
the srq object to complete.

Signed-off-by: Bob Pearson
---
 drivers/infiniband/sw/rxe/rxe_loc.h   |  7 ++++---
 drivers/infiniband/sw/rxe/rxe_pool.c  |  1 +
 drivers/infiniband/sw/rxe/rxe_srq.c   | 11 +++++++++++
 drivers/infiniband/sw/rxe/rxe_verbs.c | 27 +++++++++++----------------
 4 files changed, 27 insertions(+), 19 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe_loc.h b/drivers/infiniband/sw/rxe/rxe_loc.h
index 9067d3b6f1ee..300c702f432a 100644
--- a/drivers/infiniband/sw/rxe/rxe_loc.h
+++ b/drivers/infiniband/sw/rxe/rxe_loc.h
@@ -37,7 +37,7 @@ int rxe_cq_post(struct rxe_cq *cq, struct rxe_cqe *cqe, int solicited);
 
 void rxe_cq_disable(struct rxe_cq *cq);
 
-void rxe_cq_cleanup(struct rxe_pool_elem *arg);
+void rxe_cq_cleanup(struct rxe_pool_elem *elem);
 
 /* rxe_mcast.c */
 struct rxe_mcg *rxe_lookup_mcg(struct rxe_dev *rxe, union ib_gid *mgid);
@@ -81,7 +81,7 @@ int rxe_invalidate_mr(struct rxe_qp *qp, u32 rkey);
 int rxe_reg_fast_mr(struct rxe_qp *qp, struct rxe_send_wqe *wqe);
 int rxe_mr_set_page(struct ib_mr *ibmr, u64 addr);
 int rxe_dereg_mr(struct ib_mr *ibmr, struct ib_udata *udata);
-void rxe_mr_cleanup(struct rxe_pool_elem *arg);
+void rxe_mr_cleanup(struct rxe_pool_elem *elem);
 
 /* rxe_mw.c */
 int rxe_alloc_mw(struct ib_mw *ibmw, struct ib_udata *udata);
@@ -89,7 +89,7 @@ int rxe_dealloc_mw(struct ib_mw *ibmw);
 int rxe_bind_mw(struct rxe_qp *qp, struct rxe_send_wqe *wqe);
 int rxe_invalidate_mw(struct rxe_qp *qp, u32 rkey);
 struct rxe_mw *rxe_lookup_mw(struct rxe_qp *qp, int access, u32 rkey);
-void rxe_mw_cleanup(struct rxe_pool_elem *arg);
+void rxe_mw_cleanup(struct rxe_pool_elem *elem);
 
 /* rxe_net.c */
 struct sk_buff *rxe_init_packet(struct rxe_dev *rxe, struct rxe_av *av,
@@ -167,6 +167,7 @@ int rxe_srq_from_init(struct rxe_dev *rxe, struct rxe_srq *srq,
 int rxe_srq_from_attr(struct rxe_dev *rxe, struct rxe_srq *srq,
 		      struct ib_srq_attr *attr, enum rxe_srq_attr_mask mask,
 		      struct rxe_modify_srq_cmd *ucmd, struct ib_udata *udata);
+void rxe_srq_cleanup(struct rxe_pool_elem *elem);
 
 void rxe_dealloc(struct ib_device *ib_dev);

diff --git a/drivers/infiniband/sw/rxe/rxe_pool.c b/drivers/infiniband/sw/rxe/rxe_pool.c
index 87066d04ed18..5963b1429ad8 100644
--- a/drivers/infiniband/sw/rxe/rxe_pool.c
+++ b/drivers/infiniband/sw/rxe/rxe_pool.c
@@ -46,6 +46,7 @@ static const struct rxe_type_info {
 		.name		= "srq",
 		.size		= sizeof(struct rxe_srq),
 		.elem_offset	= offsetof(struct rxe_srq, elem),
+		.cleanup	= rxe_srq_cleanup,
 		.min_index	= RXE_MIN_SRQ_INDEX,
 		.max_index	= RXE_MAX_SRQ_INDEX,
 		.max_elem	= RXE_MAX_SRQ_INDEX - RXE_MIN_SRQ_INDEX + 1,

diff --git a/drivers/infiniband/sw/rxe/rxe_srq.c b/drivers/infiniband/sw/rxe/rxe_srq.c
index 862aa749c93a..26e7ac35733e 100644
--- a/drivers/infiniband/sw/rxe/rxe_srq.c
+++ b/drivers/infiniband/sw/rxe/rxe_srq.c
@@ -154,3 +154,14 @@ int rxe_srq_from_attr(struct rxe_dev *rxe, struct rxe_srq *srq,
 	srq->rq.queue = NULL;
 	return err;
 }
+
+void rxe_srq_cleanup(struct rxe_pool_elem *elem)
+{
+	struct rxe_srq *srq = container_of(elem, typeof(*srq), elem);
+
+	if (srq->pd)
+		rxe_put(srq->pd);
+
+	if (srq->rq.queue)
+		rxe_queue_cleanup(srq->rq.queue);
+}

diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.c b/drivers/infiniband/sw/rxe/rxe_verbs.c
index 5609956d2bc3..89f4f30f7247 100644
--- a/drivers/infiniband/sw/rxe/rxe_verbs.c
+++ b/drivers/infiniband/sw/rxe/rxe_verbs.c
@@ -286,36 +286,35 @@ static int rxe_create_srq(struct ib_srq *ibsrq, struct ib_srq_init_attr *init,
 	struct rxe_srq *srq = to_rsrq(ibsrq);
 	struct rxe_create_srq_resp __user *uresp = NULL;
 
-	if (init->srq_type != IB_SRQT_BASIC)
-		return -EOPNOTSUPP;
-
 	if (udata) {
 		if (udata->outlen < sizeof(*uresp))
 			return -EINVAL;
 		uresp = udata->outbuf;
 	}
 
+	if (init->srq_type != IB_SRQT_BASIC)
+		return -EOPNOTSUPP;
+
 	err = rxe_srq_chk_attr(rxe, NULL, &init->attr, RXE_SRQ_INIT);
 	if (err)
-		goto err1;
+		goto err_out;
 
 	err = rxe_add_to_pool(&rxe->srq_pool, srq);
 	if (err)
-		goto err1;
+		goto err_out;
 
 	rxe_get(pd);
 	srq->pd = pd;
 
 	err = rxe_srq_from_init(rxe, srq, init, udata, uresp);
 	if (err)
-		goto err2;
+		goto err_cleanup;
 
 	return 0;
 
-err2:
-	rxe_put(pd);
+err_cleanup:
 	rxe_put(srq);
-err1:
+err_out:
 	return err;
 }
 
@@ -340,15 +339,15 @@ static int rxe_modify_srq(struct ib_srq *ibsrq, struct ib_srq_attr *attr,
 
 	err = rxe_srq_chk_attr(rxe, srq, attr, mask);
 	if (err)
-		goto err1;
+		goto err_out;
 
 	err = rxe_srq_from_attr(rxe, srq, attr, mask, &ucmd, udata);
 	if (err)
-		goto err1;
+		goto err_out;
 
 	return 0;
 
-err1:
+err_out:
 	return err;
 }
 
@@ -369,10 +368,6 @@ static int rxe_destroy_srq(struct ib_srq *ibsrq, struct ib_udata *udata)
 {
 	struct rxe_srq *srq = to_rsrq(ibsrq);
 
-	if (srq->rq.queue)
-		rxe_queue_cleanup(srq->rq.queue);
-
-	rxe_put(srq->pd);
 	rxe_put(srq);
 	return 0;
 }

From patchwork Fri Mar 18 01:55:06 2022
From: Bob Pearson
To: jgg@nvidia.com, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org
Cc: Bob Pearson
Subject: [PATCH for-next v12 03/11] RDMA/rxe: Check rxe_get() return value
Date: Thu, 17 Mar 2022 20:55:06 -0500
Message-Id: <20220318015514.231621-4-rpearsonhpe@gmail.com>
In-Reply-To: <20220318015514.231621-1-rpearsonhpe@gmail.com>
References: <20220318015514.231621-1-rpearsonhpe@gmail.com>
X-Mailing-List: linux-rdma@vger.kernel.org

In the tasklets (completer, responder, and requester), check the return
value from rxe_get() to detect failures to get a reference. A failure
occurs only if the qp's reference count has already dropped to zero,
which indicates that the qp should no longer be used. This is in
preparation for an upcoming change that will move the qp cleanup code
to rxe_qp_cleanup().
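The pattern the tasklets rely on can be sketched in plain userspace C. This is an illustrative model (names like obj_get/obj_put are hypothetical, not the driver's code): a "get" in the style of the kernel's refcount_inc_not_zero() refuses to take a reference once the count has reached zero, so a late tasklet invocation backs off instead of touching a dying qp.

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

struct obj {
	atomic_int refcount;
};

/* Take a reference only if the object is still live. Once the count
 * has dropped to zero the object is being torn down and the caller
 * must bail out -- the check the tasklets now perform. */
static bool obj_get(struct obj *o)
{
	int old = atomic_load(&o->refcount);

	while (old > 0) {
		/* On failure the CAS reloads 'old' and we retry. */
		if (atomic_compare_exchange_weak(&o->refcount, &old, old + 1))
			return true;	/* reference taken */
	}
	return false;			/* already zero: do not use */
}

/* Drop a reference; returns true when the last one is gone. */
static bool obj_put(struct obj *o)
{
	return atomic_fetch_sub(&o->refcount, 1) == 1;
}
```

A tasklet modeled on this would return early (the patch uses -EAGAIN) whenever the get fails, rather than running against an object whose teardown has begun.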
Signed-off-by: Bob Pearson
---
 drivers/infiniband/sw/rxe/rxe_comp.c | 3 ++-
 drivers/infiniband/sw/rxe/rxe_req.c  | 3 ++-
 drivers/infiniband/sw/rxe/rxe_resp.c | 3 ++-
 3 files changed, 6 insertions(+), 3 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe_comp.c b/drivers/infiniband/sw/rxe/rxe_comp.c
index 138b3e7d3a5f..da3a398053b8 100644
--- a/drivers/infiniband/sw/rxe/rxe_comp.c
+++ b/drivers/infiniband/sw/rxe/rxe_comp.c
@@ -562,7 +562,8 @@ int rxe_completer(void *arg)
 	enum comp_state state;
 	int ret = 0;
 
-	rxe_get(qp);
+	if (!rxe_get(qp))
+		return -EAGAIN;
 
 	if (!qp->valid || qp->req.state == QP_STATE_ERROR ||
 	    qp->req.state == QP_STATE_RESET) {

diff --git a/drivers/infiniband/sw/rxe/rxe_req.c b/drivers/infiniband/sw/rxe/rxe_req.c
index ae5fbc79dd5c..27aba921cc66 100644
--- a/drivers/infiniband/sw/rxe/rxe_req.c
+++ b/drivers/infiniband/sw/rxe/rxe_req.c
@@ -611,7 +611,8 @@ int rxe_requester(void *arg)
 	struct rxe_ah *ah;
 	struct rxe_av *av;
 
-	rxe_get(qp);
+	if (!rxe_get(qp))
+		return -EAGAIN;
 
 next_wqe:
 	if (unlikely(!qp->valid || qp->req.state == QP_STATE_ERROR))

diff --git a/drivers/infiniband/sw/rxe/rxe_resp.c b/drivers/infiniband/sw/rxe/rxe_resp.c
index 16fc7ea1298d..1ed45c192cf5 100644
--- a/drivers/infiniband/sw/rxe/rxe_resp.c
+++ b/drivers/infiniband/sw/rxe/rxe_resp.c
@@ -1250,7 +1250,8 @@ int rxe_responder(void *arg)
 	struct rxe_pkt_info *pkt = NULL;
 	int ret = 0;
 
-	rxe_get(qp);
+	if (!rxe_get(qp))
+		return -EAGAIN;
 
 	qp->resp.aeth_syndrome = AETH_ACK_UNLIMITED;

From patchwork Fri Mar 18 01:55:07 2022
From: Bob Pearson
To: jgg@nvidia.com, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org
Cc: Bob Pearson
Subject: [PATCH for-next v12 04/11] RDMA/rxe: Move qp cleanup code to rxe_qp_do_cleanup()
Date: Thu, 17 Mar 2022 20:55:07 -0500
Message-Id: <20220318015514.231621-5-rpearsonhpe@gmail.com>
In-Reply-To: <20220318015514.231621-1-rpearsonhpe@gmail.com>
References: <20220318015514.231621-1-rpearsonhpe@gmail.com>
X-Mailing-List: linux-rdma@vger.kernel.org

Move the code from rxe_qp_destroy() to rxe_qp_do_cleanup(). This allows
flows holding references to the qp to complete before the qp object is
torn down.
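The shape of this change can be modeled in a few lines of userspace C (hypothetical names; in the driver the cleanup runs from a work item, here it is called synchronously for brevity): the destroy verb only drops its reference, and the actual teardown fires from a callback on the final put.

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

struct qp {
	atomic_int refcount;
	bool torn_down;
};

/* Stand-in for rxe_qp_do_cleanup(): invoked once the last reference
 * is gone, so no tasklet can still be using the object. */
static void qp_do_cleanup(struct qp *qp)
{
	qp->torn_down = true;
}

/* Stand-in for rxe_put(): teardown happens only on the final drop,
 * so flows still holding references keep a valid object. */
static void qp_put(struct qp *qp)
{
	if (atomic_fetch_sub(&qp->refcount, 1) == 1)
		qp_do_cleanup(qp);
}
```

The point of the patch is exactly this ordering: a tasklet that took a reference before the destroy verb ran still sees an intact qp, and teardown is deferred until its put.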
Signed-off-by: Bob Pearson
---
 drivers/infiniband/sw/rxe/rxe_loc.h   |  1 -
 drivers/infiniband/sw/rxe/rxe_qp.c    | 12 ++++--------
 drivers/infiniband/sw/rxe/rxe_verbs.c |  1 -
 3 files changed, 4 insertions(+), 10 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe_loc.h b/drivers/infiniband/sw/rxe/rxe_loc.h
index 300c702f432a..ddf91d3d5527 100644
--- a/drivers/infiniband/sw/rxe/rxe_loc.h
+++ b/drivers/infiniband/sw/rxe/rxe_loc.h
@@ -114,7 +114,6 @@ int rxe_qp_from_attr(struct rxe_qp *qp, struct ib_qp_attr *attr,
 int rxe_qp_to_attr(struct rxe_qp *qp, struct ib_qp_attr *attr, int mask);
 void rxe_qp_error(struct rxe_qp *qp);
 int rxe_qp_chk_destroy(struct rxe_qp *qp);
-void rxe_qp_destroy(struct rxe_qp *qp);
 void rxe_qp_cleanup(struct rxe_pool_elem *elem);
 
 static inline int qp_num(struct rxe_qp *qp)

diff --git a/drivers/infiniband/sw/rxe/rxe_qp.c b/drivers/infiniband/sw/rxe/rxe_qp.c
index 62acf890af6c..f5200777399c 100644
--- a/drivers/infiniband/sw/rxe/rxe_qp.c
+++ b/drivers/infiniband/sw/rxe/rxe_qp.c
@@ -777,9 +777,11 @@ int rxe_qp_chk_destroy(struct rxe_qp *qp)
 	return 0;
 }
 
-/* called by the destroy qp verb */
-void rxe_qp_destroy(struct rxe_qp *qp)
+/* called when the last reference to the qp is dropped */
+static void rxe_qp_do_cleanup(struct work_struct *work)
 {
+	struct rxe_qp *qp = container_of(work, typeof(*qp), cleanup_work.work);
+
 	qp->valid = 0;
 	qp->qp_timeout_jiffies = 0;
 	rxe_cleanup_task(&qp->resp.task);
@@ -798,12 +800,6 @@ void rxe_qp_destroy(struct rxe_qp *qp)
 		__rxe_do_task(&qp->comp.task);
 		__rxe_do_task(&qp->req.task);
 	}
-}
-
-/* called when the last reference to the qp is dropped */
-static void rxe_qp_do_cleanup(struct work_struct *work)
-{
-	struct rxe_qp *qp = container_of(work, typeof(*qp), cleanup_work.work);
 
 	if (qp->sq.queue)
 		rxe_queue_cleanup(qp->sq.queue);

diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.c b/drivers/infiniband/sw/rxe/rxe_verbs.c
index 89f4f30f7247..9a3c33dad979 100644
--- a/drivers/infiniband/sw/rxe/rxe_verbs.c
+++ b/drivers/infiniband/sw/rxe/rxe_verbs.c
@@ -491,7 +491,6 @@ static int rxe_destroy_qp(struct ib_qp *ibqp, struct ib_udata *udata)
 	if (ret)
 		return ret;
 
-	rxe_qp_destroy(qp);
 	rxe_put(qp);
 	return 0;
 }

From patchwork Fri Mar 18 01:55:08 2022
From: Bob Pearson
To: jgg@nvidia.com, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org
Cc: Bob Pearson
Subject: [PATCH for-next v12 05/11] RDMA/rxe: Move mr cleanup code to rxe_mr_cleanup()
Date: Thu, 17 Mar 2022 20:55:08 -0500
Message-Id: <20220318015514.231621-6-rpearsonhpe@gmail.com>
In-Reply-To: <20220318015514.231621-1-rpearsonhpe@gmail.com>
References: <20220318015514.231621-1-rpearsonhpe@gmail.com>
X-Mailing-List: linux-rdma@vger.kernel.org

Move the code which tears down an mr to rxe_mr_cleanup() to allow
operations holding a reference to the mr to complete.
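The resulting split can be sketched in userspace C (illustrative stand-ins, not the driver code): deregistration is refused while memory windows are still bound to the MR (the IBA 10.6.7.2.6 check the patch keeps in rxe_dereg_mr()), while the PD reference is released only in the deferred cleanup that runs after the last reference is dropped.

```c
#include <assert.h>
#include <errno.h>
#include <stdatomic.h>

struct mr {
	atomic_int num_mw;	/* memory windows still bound to this MR */
	int pd_refs;		/* references this MR holds on its PD */
};

/* Stand-in for rxe_dereg_mr(): refuse while MWs are bound; the real
 * teardown is deferred to the cleanup callback, so dereg itself only
 * drops the caller's reference (omitted here). */
static int mr_dereg(struct mr *mr)
{
	if (atomic_load(&mr->num_mw) > 0)
		return -EINVAL;
	return 0;
}

/* Stand-in for rxe_mr_cleanup(): runs after the last reference is
 * gone, so the PD reference is released here, not in dereg. */
static void mr_cleanup(struct mr *mr)
{
	mr->pd_refs--;
}
```

Dropping the PD reference in the cleanup callback rather than in dereg means an in-flight operation that still holds the MR also keeps the PD alive until it finishes.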
Signed-off-by: Bob Pearson
---
 drivers/infiniband/sw/rxe/rxe_mr.c | 10 ++++------
 1 file changed, 4 insertions(+), 6 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe_mr.c b/drivers/infiniband/sw/rxe/rxe_mr.c
index 60a31b718774..fc3942e04a1f 100644
--- a/drivers/infiniband/sw/rxe/rxe_mr.c
+++ b/drivers/infiniband/sw/rxe/rxe_mr.c
@@ -683,14 +683,10 @@ int rxe_dereg_mr(struct ib_mr *ibmr, struct ib_udata *udata)
 {
 	struct rxe_mr *mr = to_rmr(ibmr);
 
-	if (atomic_read(&mr->num_mw) > 0) {
-		pr_warn("%s: Attempt to deregister an MR while bound to MWs\n",
-			__func__);
+	/* See IBA 10.6.7.2.6 */
+	if (atomic_read(&mr->num_mw) > 0)
 		return -EINVAL;
-	}
 
-	mr->state = RXE_MR_STATE_INVALID;
-	rxe_put(mr_pd(mr));
 	rxe_put(mr);
 
 	return 0;
@@ -700,6 +696,8 @@ void rxe_mr_cleanup(struct rxe_pool_elem *elem)
 {
 	struct rxe_mr *mr = container_of(elem, typeof(*mr), elem);
 
+	rxe_put(mr_pd(mr));
+
 	ib_umem_release(mr->umem);
 
 	if (mr->cur_map_set)

From patchwork Fri Mar 18 01:55:09 2022
X-Patchwork-Submitter: Bob Pearson
X-Patchwork-Id: 12784764
From: Bob Pearson
To: jgg@nvidia.com, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org
Cc: Bob Pearson
Subject: [PATCH for-next v12 06/11] RDMA/rxe: Move mw cleanup code to rxe_mw_cleanup()
Date: Thu, 17 Mar 2022 20:55:09 -0500
Message-Id: <20220318015514.231621-7-rpearsonhpe@gmail.com>

Move code from rxe_dealloc_mw() to rxe_mw_cleanup() to allow flows
which hold a reference to mw to complete.

Signed-off-by: Bob Pearson
---
 drivers/infiniband/sw/rxe/rxe_mw.c   | 57 ++++++++++++++--------------
 drivers/infiniband/sw/rxe/rxe_pool.c |  1 +
 2 files changed, 29 insertions(+), 29 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe_mw.c b/drivers/infiniband/sw/rxe/rxe_mw.c
index c86b2efd58f2..ba3f94c69171 100644
--- a/drivers/infiniband/sw/rxe/rxe_mw.c
+++ b/drivers/infiniband/sw/rxe/rxe_mw.c
@@ -28,40 +28,11 @@ int rxe_alloc_mw(struct ib_mw *ibmw, struct ib_udata *udata)
 	return 0;
 }
 
-static void rxe_do_dealloc_mw(struct rxe_mw *mw)
-{
-	if (mw->mr) {
-		struct rxe_mr *mr = mw->mr;
-
-		mw->mr = NULL;
-		atomic_dec(&mr->num_mw);
-		rxe_put(mr);
-	}
-
-	if (mw->qp) {
-		struct rxe_qp *qp = mw->qp;
-
-		mw->qp = NULL;
-		rxe_put(qp);
-	}
-
-	mw->access = 0;
-	mw->addr = 0;
-	mw->length = 0;
-	mw->state = RXE_MW_STATE_INVALID;
-}
-
 int rxe_dealloc_mw(struct ib_mw *ibmw)
 {
 	struct rxe_mw *mw = to_rmw(ibmw);
-	struct rxe_pd *pd = to_rpd(ibmw->pd);
-
-	spin_lock_bh(&mw->lock);
-	rxe_do_dealloc_mw(mw);
-	spin_unlock_bh(&mw->lock);
 
 	rxe_put(mw);
-	rxe_put(pd);
 
 	return 0;
 }
@@ -328,3 +299,31 @@ struct rxe_mw *rxe_lookup_mw(struct rxe_qp *qp, int access, u32 rkey)
 	return mw;
 }
+
+void rxe_mw_cleanup(struct rxe_pool_elem *elem)
+{
+	struct rxe_mw *mw = container_of(elem, typeof(*mw), elem);
+	struct rxe_pd *pd = to_rpd(mw->ibmw.pd);
+
+	rxe_put(pd);
+
+	if (mw->mr) {
+		struct rxe_mr *mr = mw->mr;
+
+		mw->mr = NULL;
+		atomic_dec(&mr->num_mw);
+		rxe_put(mr);
+	}
+
+	if (mw->qp) {
+		struct rxe_qp *qp = mw->qp;
+
+		mw->qp = NULL;
+		rxe_put(qp);
+	}
+
+	mw->access = 0;
+	mw->addr = 0;
+	mw->length = 0;
+	mw->state = RXE_MW_STATE_INVALID;
+}

diff --git a/drivers/infiniband/sw/rxe/rxe_pool.c b/drivers/infiniband/sw/rxe/rxe_pool.c
index 5963b1429ad8..0fdde3d46949 100644
--- a/drivers/infiniband/sw/rxe/rxe_pool.c
+++ b/drivers/infiniband/sw/rxe/rxe_pool.c
@@ -83,6 +83,7 @@ static const struct rxe_type_info {
 		.name		= "mw",
 		.size		= sizeof(struct rxe_mw),
 		.elem_offset	= offsetof(struct rxe_mw, elem),
+		.cleanup	= rxe_mw_cleanup,
 		.min_index	= RXE_MIN_MW_INDEX,
 		.max_index	= RXE_MAX_MW_INDEX,
 		.max_elem	= RXE_MAX_MW_INDEX - RXE_MIN_MW_INDEX + 1,

From patchwork Fri Mar 18 01:55:10 2022
X-Patchwork-Submitter: Bob Pearson
X-Patchwork-Id: 12784761
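Registering rxe_mw_cleanup in the rxe_type_info table lets the generic pool release path dispatch type-specific teardown without knowing about memory windows. A simplified userspace sketch of that table-driven dispatch; types and names are illustrative stand-ins for the rxe structures:

```c
#include <assert.h>
#include <stddef.h>

struct pool_elem { int index; };

struct type_info {
	const char *name;
	void (*cleanup)(struct pool_elem *elem);  /* may be NULL */
};

static int mw_cleanups;

static void mw_cleanup(struct pool_elem *elem)
{
	(void)elem;
	mw_cleanups++;       /* stand-in for rxe_mw_cleanup() work */
}

static const struct type_info types[] = {
	{ .name = "pd" },                         /* no cleanup hook */
	{ .name = "mw", .cleanup = mw_cleanup },  /* hook added by this patch */
};

/* Analog of the pool release path: invoke the hook only when present. */
static void elem_release(const struct type_info *type, struct pool_elem *elem)
{
	if (type->cleanup)
		type->cleanup(elem);
}
```

The pool layer stays generic: adding teardown for a new object type is a one-line table entry rather than a change to the release path.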
From: Bob Pearson
To: jgg@nvidia.com, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org
Cc: Bob Pearson
Subject: [PATCH for-next v12 07/11] RDMA/rxe: Enforce IBA C11-17
Date: Thu, 17 Mar 2022 20:55:10 -0500
Message-Id: <20220318015514.231621-8-rpearsonhpe@gmail.com>

Add a counter to keep track of the number of WQs connected to a CQ and
return an error if destroy_cq() is called while the counter is non-zero.

Signed-off-by: Bob Pearson
---
 drivers/infiniband/sw/rxe/rxe_qp.c    | 10 ++++++++++
 drivers/infiniband/sw/rxe/rxe_verbs.c |  6 ++++++
 drivers/infiniband/sw/rxe/rxe_verbs.h |  1 +
 3 files changed, 17 insertions(+)

diff --git a/drivers/infiniband/sw/rxe/rxe_qp.c b/drivers/infiniband/sw/rxe/rxe_qp.c
index f5200777399c..18861b9edbfd 100644
--- a/drivers/infiniband/sw/rxe/rxe_qp.c
+++ b/drivers/infiniband/sw/rxe/rxe_qp.c
@@ -334,6 +334,9 @@ int rxe_qp_from_init(struct rxe_dev *rxe, struct rxe_qp *qp, struct rxe_pd *pd,
 	qp->scq = scq;
 	qp->srq = srq;
 
+	atomic_inc(&rcq->num_wq);
+	atomic_inc(&scq->num_wq);
+
 	rxe_qp_init_misc(rxe, qp, init);
 
 	err = rxe_qp_init_req(rxe, qp, init, udata, uresp);
@@ -353,6 +356,9 @@ int rxe_qp_from_init(struct rxe_dev *rxe, struct rxe_qp *qp, struct rxe_pd *pd,
 	rxe_queue_cleanup(qp->sq.queue);
 	qp->sq.queue = NULL;
 err1:
+	atomic_dec(&rcq->num_wq);
+	atomic_dec(&scq->num_wq);
+
 	qp->pd = NULL;
 	qp->rcq = NULL;
 	qp->scq = NULL;
@@ -810,10 +816,14 @@ static void rxe_qp_do_cleanup(struct work_struct *work)
 	if (qp->rq.queue)
 		rxe_queue_cleanup(qp->rq.queue);
 
+	atomic_dec(&qp->scq->num_wq);
 	if (qp->scq)
 		rxe_put(qp->scq);
+
+	atomic_dec(&qp->rcq->num_wq);
 	if (qp->rcq)
 		rxe_put(qp->rcq);
+
 	if (qp->pd)
 		rxe_put(qp->pd);

diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.c b/drivers/infiniband/sw/rxe/rxe_verbs.c
index 9a3c33dad979..4c082ac439c6 100644
--- a/drivers/infiniband/sw/rxe/rxe_verbs.c
+++ b/drivers/infiniband/sw/rxe/rxe_verbs.c
@@ -802,6 +802,12 @@ static int rxe_destroy_cq(struct ib_cq *ibcq, struct ib_udata *udata)
 {
 	struct rxe_cq *cq = to_rcq(ibcq);
 
+	/* See IBA C11-17: The CI shall return an error if this Verb is
+	 * invoked while a Work Queue is still associated with the CQ.
+	 */
+	if (atomic_read(&cq->num_wq))
+		return -EINVAL;
+
 	rxe_cq_disable(cq);
 
 	rxe_put(cq);

diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.h b/drivers/infiniband/sw/rxe/rxe_verbs.h
index 34aa013c7801..5764aeed921a 100644
--- a/drivers/infiniband/sw/rxe/rxe_verbs.h
+++ b/drivers/infiniband/sw/rxe/rxe_verbs.h
@@ -67,6 +67,7 @@ struct rxe_cq {
 	bool			is_dying;
 	bool			is_user;
 	struct tasklet_struct	comp_task;
+	atomic_t		num_wq;
 };
 
 enum wqe_state {

From patchwork Fri Mar 18 01:55:11 2022
X-Patchwork-Submitter: Bob Pearson
X-Patchwork-Id: 12784763
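The C11-17 check reduces to a small invariant: a CQ may not be destroyed while any work queue still counts against it. A minimal userspace model of that guard, with an atomic counter mirroring the num_wq field this patch adds; everything else is illustrative:

```c
#include <assert.h>
#include <errno.h>
#include <stdatomic.h>

struct cq { atomic_int num_wq; };

/* A QP attaches its send/recv work queues to a CQ at create time... */
static void qp_attach(struct cq *cq) { atomic_fetch_add(&cq->num_wq, 1); }
/* ...and detaches them again during QP cleanup. */
static void qp_detach(struct cq *cq) { atomic_fetch_sub(&cq->num_wq, 1); }

static int destroy_cq(struct cq *cq)
{
	/* IBA C11-17: error while a Work Queue is still associated */
	if (atomic_load(&cq->num_wq))
		return -EINVAL;
	return 0;                      /* safe to tear down */
}
```

Because attach and detach bracket the QP's lifetime, the counter can only be zero once no QP can still post completions to the CQ.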
From: Bob Pearson
To: jgg@nvidia.com, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org
Cc: Bob Pearson
Subject: [PATCH for-next v12 08/11] RDMA/rxe: Stop lookup of partially built objects
Date: Thu, 17 Mar 2022 20:55:11 -0500
Message-Id: <20220318015514.231621-9-rpearsonhpe@gmail.com>

Currently the rdma_rxe driver has a security weakness: it assigns
indices to partially initialized objects, which lets external actors
gain access to them by sending packets that refer to their index
(e.g. qpn, rkey, etc.), causing unpredictable results.

This patch adds two new APIs, rxe_finalize(obj) and
rxe_disable_lookup(obj), which enable or disable looking up pool
objects from indices using rxe_pool_get_index(). By default objects
are disabled. These APIs are used for the object types which have
indices: AH, QP, MR, and MW. rxe_finalize() is called in the create
verbs after the object is fully initialized, and rxe_disable_lookup()
as soon as possible in the destroy verbs.

Note: the sequence

	rxe_disable_lookup(obj);
	rxe_put(obj);

in the destroy verbs stops lookups of the object by clearing its
pointer in the xarray with __xa_store() before a possibly long wait
until the last reference is dropped somewhere else and the xarray
entry is erased with xa_erase().
Signed-off-by: Bob Pearson
---
 drivers/infiniband/sw/rxe/rxe_mr.c    |  1 +
 drivers/infiniband/sw/rxe/rxe_mw.c    |  3 +++
 drivers/infiniband/sw/rxe/rxe_pool.c  | 36 ++++++++++++++++++++++++---
 drivers/infiniband/sw/rxe/rxe_pool.h  |  9 ++++---
 drivers/infiniband/sw/rxe/rxe_verbs.c | 10 ++++++++
 5 files changed, 53 insertions(+), 6 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe_mr.c b/drivers/infiniband/sw/rxe/rxe_mr.c
index fc3942e04a1f..8059f31882ae 100644
--- a/drivers/infiniband/sw/rxe/rxe_mr.c
+++ b/drivers/infiniband/sw/rxe/rxe_mr.c
@@ -687,6 +687,7 @@ int rxe_dereg_mr(struct ib_mr *ibmr, struct ib_udata *udata)
 	if (atomic_read(&mr->num_mw) > 0)
 		return -EINVAL;
 
+	rxe_disable_lookup(mr);
 	rxe_put(mr);
 
 	return 0;

diff --git a/drivers/infiniband/sw/rxe/rxe_mw.c b/drivers/infiniband/sw/rxe/rxe_mw.c
index ba3f94c69171..2464952a04d4 100644
--- a/drivers/infiniband/sw/rxe/rxe_mw.c
+++ b/drivers/infiniband/sw/rxe/rxe_mw.c
@@ -25,6 +25,8 @@ int rxe_alloc_mw(struct ib_mw *ibmw, struct ib_udata *udata)
 			RXE_MW_STATE_FREE : RXE_MW_STATE_VALID;
 	spin_lock_init(&mw->lock);
 
+	rxe_finalize(mw);
+
 	return 0;
 }
 
@@ -32,6 +34,7 @@ int rxe_dealloc_mw(struct ib_mw *ibmw)
 {
 	struct rxe_mw *mw = to_rmw(ibmw);
 
+	rxe_disable_lookup(mw);
 	rxe_put(mw);
 
 	return 0;

diff --git a/drivers/infiniband/sw/rxe/rxe_pool.c b/drivers/infiniband/sw/rxe/rxe_pool.c
index 0fdde3d46949..87b89d263f80 100644
--- a/drivers/infiniband/sw/rxe/rxe_pool.c
+++ b/drivers/infiniband/sw/rxe/rxe_pool.c
@@ -140,7 +140,9 @@ void *rxe_alloc(struct rxe_pool *pool)
 	elem->obj = obj;
 	kref_init(&elem->ref_cnt);
 
-	err = xa_alloc_cyclic(&pool->xa, &elem->index, elem, pool->limit,
+	/* allocate index in array but leave pointer as NULL so it
+	 * can't be looked up until rxe_finalize() is called
+	 */
+	err = xa_alloc_cyclic(&pool->xa, &elem->index, NULL, pool->limit,
 			      &pool->next, GFP_KERNEL);
 	if (err)
 		goto err_free;
@@ -168,7 +170,7 @@ int __rxe_add_to_pool(struct rxe_pool *pool, struct rxe_pool_elem *elem)
 	elem->obj = (u8 *)elem - pool->elem_offset;
 	kref_init(&elem->ref_cnt);
 
-	err = xa_alloc_cyclic(&pool->xa, &elem->index, elem, pool->limit,
+	err = xa_alloc_cyclic(&pool->xa, &elem->index, NULL, pool->limit,
 			      &pool->next, GFP_KERNEL);
 	if (err)
 		goto err_cnt;
@@ -202,8 +204,12 @@ static void rxe_elem_release(struct kref *kref)
 {
 	struct rxe_pool_elem *elem = container_of(kref, typeof(*elem), ref_cnt);
 	struct rxe_pool *pool = elem->pool;
+	struct xarray *xa = &pool->xa;
+	unsigned long flags;
 
-	xa_erase(&pool->xa, elem->index);
+	xa_lock_irqsave(xa, flags);
+	__xa_erase(&pool->xa, elem->index);
+	xa_unlock_irqrestore(xa, flags);
 
 	if (pool->cleanup)
 		pool->cleanup(elem);
@@ -223,3 +229,27 @@ int __rxe_put(struct rxe_pool_elem *elem)
 {
 	return kref_put(&elem->ref_cnt, rxe_elem_release);
 }
+
+void __rxe_finalize(struct rxe_pool_elem *elem)
+{
+	struct xarray *xa = &elem->pool->xa;
+	unsigned long flags;
+	void *ret;
+
+	xa_lock_irqsave(xa, flags);
+	ret = __xa_store(&elem->pool->xa, elem->index, elem, GFP_KERNEL);
+	xa_unlock_irqrestore(xa, flags);
+	WARN_ON(xa_err(ret));
+}
+
+void __rxe_disable_lookup(struct rxe_pool_elem *elem)
+{
+	struct xarray *xa = &elem->pool->xa;
+	unsigned long flags;
+	void *ret;
+
+	xa_lock_irqsave(xa, flags);
+	ret = __xa_store(&elem->pool->xa, elem->index, NULL, GFP_KERNEL);
+	xa_unlock_irqrestore(xa, flags);
+	WARN_ON(xa_err(ret));
+}

diff --git a/drivers/infiniband/sw/rxe/rxe_pool.h b/drivers/infiniband/sw/rxe/rxe_pool.h
index 24bcc786c1b3..aa66d0eea13b 100644
--- a/drivers/infiniband/sw/rxe/rxe_pool.h
+++ b/drivers/infiniband/sw/rxe/rxe_pool.h
@@ -63,20 +63,23 @@ void *rxe_alloc(struct rxe_pool *pool);
 
 /* connect already allocated object to pool */
 int __rxe_add_to_pool(struct rxe_pool *pool, struct rxe_pool_elem *elem);
-
 #define rxe_add_to_pool(pool, obj) __rxe_add_to_pool(pool, &(obj)->elem)
 
 /* lookup an indexed object from index. takes a reference on object */
 void *rxe_pool_get_index(struct rxe_pool *pool, u32 index);
 
 int __rxe_get(struct rxe_pool_elem *elem);
-
 #define rxe_get(obj) __rxe_get(&(obj)->elem)
 
 int __rxe_put(struct rxe_pool_elem *elem);
-
 #define rxe_put(obj) __rxe_put(&(obj)->elem)
 
 #define rxe_read(obj) kref_read(&(obj)->elem.ref_cnt)
 
+void __rxe_finalize(struct rxe_pool_elem *elem);
+#define rxe_finalize(obj) __rxe_finalize(&(obj)->elem)
+
+void __rxe_disable_lookup(struct rxe_pool_elem *elem);
+#define rxe_disable_lookup(obj) __rxe_disable_lookup(&(obj)->elem)
+
 #endif /* RXE_POOL_H */

diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.c b/drivers/infiniband/sw/rxe/rxe_verbs.c
index 4c082ac439c6..6a83b4a630f5 100644
--- a/drivers/infiniband/sw/rxe/rxe_verbs.c
+++ b/drivers/infiniband/sw/rxe/rxe_verbs.c
@@ -197,6 +197,8 @@ static int rxe_create_ah(struct ib_ah *ibah,
 	}
 
 	rxe_init_av(init_attr->ah_attr, &ah->av);
+	rxe_finalize(ah);
+
 	return 0;
 }
@@ -228,6 +230,7 @@ static int rxe_destroy_ah(struct ib_ah *ibah, u32 flags)
 {
 	struct rxe_ah *ah = to_rah(ibah);
 
+	rxe_disable_lookup(ah);
 	rxe_put(ah);
 	return 0;
 }
@@ -435,6 +438,7 @@ static int rxe_create_qp(struct ib_qp *ibqp, struct ib_qp_init_attr *init,
 	if (err)
 		goto qp_init;
 
+	rxe_finalize(qp);
 	return 0;
 
 qp_init:
@@ -487,6 +491,7 @@ static int rxe_destroy_qp(struct ib_qp *ibqp, struct ib_udata *udata)
 	struct rxe_qp *qp = to_rqp(ibqp);
 	int ret;
 
+	rxe_disable_lookup(qp);
 	ret = rxe_qp_chk_destroy(qp);
 	if (ret)
 		return ret;
@@ -905,6 +910,7 @@ static struct ib_mr *rxe_get_dma_mr(struct ib_pd *ibpd, int access)
 	rxe_get(pd);
 	rxe_mr_init_dma(pd, access, mr);
+	rxe_finalize(mr);
 
 	return &mr->ibmr;
 }
@@ -933,6 +939,8 @@ static struct ib_mr *rxe_reg_user_mr(struct ib_pd *ibpd,
 	if (err)
 		goto err3;
 
+	rxe_finalize(mr);
+
 	return &mr->ibmr;
 
 err3:
@@ -965,6 +973,8 @@ static struct ib_mr *rxe_alloc_mr(struct ib_pd *ibpd, enum ib_mr_type mr_type,
 	if (err)
 		goto err2;
 
+	rxe_finalize(mr);
+
 	return &mr->ibmr;
 
 err2:

From patchwork Fri Mar 18 01:55:12 2022
X-Patchwork-Submitter: Bob Pearson
X-Patchwork-Id: 12784765
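The two-phase visibility scheme above can be modeled in a few lines: a slot is reserved at create time but holds NULL, so lookups by index fail until finalize() publishes the pointer, and disable_lookup() hides it again before teardown. A userspace sketch where a fixed array stands in for the xarray; all names are illustrative:

```c
#include <assert.h>
#include <stddef.h>

#define NSLOTS 8

static struct { void *obj; int reserved; } slots[NSLOTS];

/* Reserve an index but leave the pointer NULL: not yet visible. */
static int index_alloc(void)
{
	for (int i = 0; i < NSLOTS; i++) {
		if (!slots[i].reserved) {
			slots[i].reserved = 1;
			slots[i].obj = NULL;
			return i;
		}
	}
	return -1;
}

/* Analog of rxe_finalize(): publish the object for lookup. */
static void index_finalize(int index, void *obj)
{
	slots[index].obj = obj;
}

/* Analog of rxe_disable_lookup(): hide the object before the last put. */
static void index_disable_lookup(int index)
{
	slots[index].obj = NULL;
}

/* Analog of rxe_pool_get_index(): NULL for reserved-but-unpublished. */
static void *index_lookup(int index)
{
	if (index < 0 || index >= NSLOTS)
		return NULL;
	return slots[index].obj;
}
```

An incoming packet naming the index between reservation and finalize simply misses, which is the security property the patch is after.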
From: Bob Pearson
To: jgg@nvidia.com, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org
Cc: Bob Pearson
Subject: [PATCH for-next v12 09/11] RDMA/rxe: Add wait_for_completion to pool objects
Date: Thu, 17 Mar 2022 20:55:12 -0500
Message-Id: <20220318015514.231621-10-rpearsonhpe@gmail.com>

Add wait for completion to destroy/dealloc verbs to assure that all
references have been dropped before returning to rdma_core. Implement
a new rxe_pool API rxe_cleanup() which drops a reference to the object
and then waits for all other references to be dropped. After that it
cleans up the object and, if locally allocated, frees the memory.
Combined with deferring cleanup code to type-specific cleanup
routines, this allows all pending activity referring to objects to
complete before returning to rdma_core.

Signed-off-by: Bob Pearson
---
 drivers/infiniband/sw/rxe/rxe_mr.c    |  2 +-
 drivers/infiniband/sw/rxe/rxe_mw.c    |  2 +-
 drivers/infiniband/sw/rxe/rxe_pool.c  | 29 +++++++++++++++++++++++++++
 drivers/infiniband/sw/rxe/rxe_pool.h  |  5 +++++
 drivers/infiniband/sw/rxe/rxe_verbs.c | 24 +++++++++++-----------
 5 files changed, 48 insertions(+), 14 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe_mr.c b/drivers/infiniband/sw/rxe/rxe_mr.c
index 8059f31882ae..94416e76c5d9 100644
--- a/drivers/infiniband/sw/rxe/rxe_mr.c
+++ b/drivers/infiniband/sw/rxe/rxe_mr.c
@@ -688,7 +688,7 @@ int rxe_dereg_mr(struct ib_mr *ibmr, struct ib_udata *udata)
 		return -EINVAL;
 
 	rxe_disable_lookup(mr);
-	rxe_put(mr);
+	rxe_cleanup(mr);
 
 	return 0;
 }

diff --git a/drivers/infiniband/sw/rxe/rxe_mw.c b/drivers/infiniband/sw/rxe/rxe_mw.c
index 2464952a04d4..f439548c8945 100644
--- a/drivers/infiniband/sw/rxe/rxe_mw.c
+++ b/drivers/infiniband/sw/rxe/rxe_mw.c
@@ -35,7 +35,7 @@ int rxe_dealloc_mw(struct ib_mw *ibmw)
 	struct rxe_mw *mw = to_rmw(ibmw);
 
 	rxe_disable_lookup(mw);
-	rxe_put(mw);
+	rxe_cleanup(mw);
 
 	return 0;
 }

diff --git a/drivers/infiniband/sw/rxe/rxe_pool.c b/drivers/infiniband/sw/rxe/rxe_pool.c
index 87b89d263f80..6f39f74233d2 100644
--- a/drivers/infiniband/sw/rxe/rxe_pool.c
+++ b/drivers/infiniband/sw/rxe/rxe_pool.c
@@ -6,6 +6,8 @@
 
 #include "rxe.h"
 
+#define RXE_POOL_TIMEOUT	(200)
+#define RXE_POOL_MAX_TIMEOUTS	(3)
 #define RXE_POOL_ALIGN		(16)
 
 static const struct rxe_type_info {
@@ -139,6 +141,7 @@ void *rxe_alloc(struct rxe_pool *pool)
 	elem->pool = pool;
 	elem->obj = obj;
 	kref_init(&elem->ref_cnt);
+	init_completion(&elem->complete);
 
 	/* allocate index in array but leave pointer as NULL so it
 	 * can't be looked up until rxe_finalize() is called
 	 */
@@ -169,6 +172,7 @@ int __rxe_add_to_pool(struct rxe_pool *pool, struct rxe_pool_elem *elem)
 	elem->pool = pool;
 	elem->obj = (u8 *)elem - pool->elem_offset;
 	kref_init(&elem->ref_cnt);
+	init_completion(&elem->complete);
 
 	err = xa_alloc_cyclic(&pool->xa, &elem->index, NULL, pool->limit,
 			      &pool->next, GFP_KERNEL);
@@ -211,6 +215,29 @@ static void rxe_elem_release(struct kref *kref)
 	__xa_erase(&pool->xa, elem->index);
 	xa_unlock_irqrestore(xa, flags);
 
+	complete(&elem->complete);
+}
+
+int __rxe_cleanup(struct rxe_pool_elem *elem)
+{
+	struct rxe_pool *pool = elem->pool;
+	static int timeout = RXE_POOL_TIMEOUT;
+	int ret, err = 0;
+
+	__rxe_put(elem);
+
+	if (timeout) {
+		ret = wait_for_completion_timeout(&elem->complete, timeout);
+		if (!ret) {
+			pr_warn("Timed out waiting for %s#%d to complete\n",
+				pool->name, elem->index);
+			if (++pool->timeouts >= RXE_POOL_MAX_TIMEOUTS)
+				timeout = 0;
+
+			err = -EINVAL;
+		}
+	}
+
 	if (pool->cleanup)
 		pool->cleanup(elem);
 
@@ -218,6 +245,8 @@ static void rxe_elem_release(struct kref *kref)
 		kfree(elem->obj);
 
 	atomic_dec(&pool->num_elem);
+
+	return err;
 }
 
 int __rxe_get(struct rxe_pool_elem *elem)

diff --git a/drivers/infiniband/sw/rxe/rxe_pool.h b/drivers/infiniband/sw/rxe/rxe_pool.h
index aa66d0eea13b..3b2c80d5345a 100644
--- a/drivers/infiniband/sw/rxe/rxe_pool.h
+++ b/drivers/infiniband/sw/rxe/rxe_pool.h
@@ -28,6 +28,7 @@ struct rxe_pool_elem {
 	void			*obj;
 	struct kref		ref_cnt;
 	struct list_head	list;
+	struct completion	complete;
 	u32			index;
 };
 
@@ -37,6 +38,7 @@ struct rxe_pool {
 	void			(*cleanup)(struct rxe_pool_elem *elem);
 	enum rxe_pool_flags	flags;
 	enum rxe_elem_type	type;
+	unsigned int		timeouts;
 
 	unsigned int		max_elem;
 	atomic_t		num_elem;
@@ -74,6 +76,9 @@ int __rxe_get(struct rxe_pool_elem *elem);
 int __rxe_put(struct rxe_pool_elem *elem);
 #define rxe_put(obj) __rxe_put(&(obj)->elem)
 
+int __rxe_cleanup(struct rxe_pool_elem *elem);
+#define rxe_cleanup(obj) __rxe_cleanup(&(obj)->elem)
+
 #define rxe_read(obj) kref_read(&(obj)->elem.ref_cnt)
 
 void __rxe_finalize(struct rxe_pool_elem *elem);

diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.c b/drivers/infiniband/sw/rxe/rxe_verbs.c
index 6a83b4a630f5..a08d7b0c6221 100644
--- a/drivers/infiniband/sw/rxe/rxe_verbs.c
+++ b/drivers/infiniband/sw/rxe/rxe_verbs.c
@@ -115,7 +115,7 @@ static void rxe_dealloc_ucontext(struct ib_ucontext *ibuc)
 {
 	struct rxe_ucontext *uc = to_ruc(ibuc);
 
-	rxe_put(uc);
+	rxe_cleanup(uc);
 }
 
 static int rxe_port_immutable(struct ib_device *dev, u32 port_num,
@@ -149,7 +149,7 @@ static int rxe_dealloc_pd(struct ib_pd *ibpd, struct ib_udata *udata)
 {
 	struct rxe_pd *pd = to_rpd(ibpd);
 
-	rxe_put(pd);
+	rxe_cleanup(pd);
 	return 0;
 }
@@ -188,7 +188,7 @@ static int rxe_create_ah(struct ib_ah *ibah,
 		err = copy_to_user(&uresp->ah_num, &ah->ah_num,
 				   sizeof(uresp->ah_num));
 		if (err) {
-			rxe_put(ah);
+			rxe_cleanup(ah);
 			return -EFAULT;
 		}
 	} else if (ah->is_user) {
@@ -231,7 +231,7 @@ static int rxe_destroy_ah(struct ib_ah *ibah, u32 flags)
 	struct rxe_ah *ah = to_rah(ibah);
 
 	rxe_disable_lookup(ah);
-	rxe_put(ah);
+	rxe_cleanup(ah);
 	return 0;
 }
@@ -316,7 +316,7 @@ static int rxe_create_srq(struct ib_srq *ibsrq, struct ib_srq_init_attr *init,
 	return 0;
 
 err_cleanup:
-	rxe_put(srq);
+	rxe_cleanup(srq);
 err_out:
 	return err;
 }
@@ -371,7 +371,7 @@ static int rxe_destroy_srq(struct ib_srq *ibsrq, struct ib_udata *udata)
 {
 	struct rxe_srq *srq = to_rsrq(ibsrq);
 
-	rxe_put(srq);
+	rxe_cleanup(srq);
 	return 0;
 }
@@ -442,7 +442,7 @@ static int rxe_create_qp(struct ib_qp *ibqp, struct ib_qp_init_attr *init,
 	return 0;
 
 qp_init:
-	rxe_put(qp);
+	rxe_cleanup(qp);
 	return err;
 }
@@ -491,12 +491,12 @@ static int rxe_destroy_qp(struct ib_qp *ibqp, struct ib_udata *udata)
 	struct rxe_qp *qp = to_rqp(ibqp);
 	int ret;
 
-	rxe_disable_lookup(qp);
 	ret = rxe_qp_chk_destroy(qp);
 	if (ret)
 		return ret;
 
-	rxe_put(qp);
+	rxe_disable_lookup(qp);
+	rxe_cleanup(qp);
 	return 0;
 }
@@ -815,7 +815,7 @@ static int rxe_destroy_cq(struct ib_cq *ibcq, struct ib_udata *udata)
 	rxe_cq_disable(cq);
 
-	rxe_put(cq);
+	rxe_cleanup(cq);
 	return 0;
 }
@@ -945,7 +945,7 @@ static struct ib_mr *rxe_reg_user_mr(struct ib_pd *ibpd,
 err3:
 	rxe_put(pd);
-	rxe_put(mr);
+	rxe_cleanup(mr);
 err2:
 	return ERR_PTR(err);
 }
@@ -979,7 +979,7 @@ static struct ib_mr *rxe_alloc_mr(struct ib_pd *ibpd, enum ib_mr_type mr_type,
 err2:
 	rxe_put(pd);
-	rxe_put(mr);
+	rxe_cleanup(mr);
 err1:
 	return ERR_PTR(err);
 }

From patchwork Fri Mar 18 01:55:13 2022
X-Patchwork-Submitter: Bob Pearson
X-Patchwork-Id: 12784766
vFCLIOVwZjq39xhklgvdnyvDSxHkXykCyo2dTe48SPuPy978NgwKZoNhv9GDzP9hGnbL 9H61UHuwrPiTL3SaSYc0ggS6xO5KYrc+cZ6z6F59lKQ+i32el9SwrefbBKqIHC16eaRy FWSQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=x/XVrqL1j/fKwDAY2wInSzzWzgO8c6dYaexgt/1KeDA=; b=2SYRQiYYxyDZ3VjOfD1bWWBlCT8xWpoZP9BRCX5ThkdcZupqX+NOY1j5bFqRWCX//U ewXSvYa7w5+t1vJcjiDb+/9LH6GUE2VELW1Hx7I7Q1gIzLVAuuRXODC8dlIxZVqTLCaW lfM5lS5+DcCJSMqhSADTM/mXt5kAdfEHvHZ76nrKA2H50vawKpn7pacBl6QxlBXvNolN UlWOuvSUnssvlceG+OcE58mhpjYwtKwrvEuB3pnbUaA49VWIq3v/32MVjh73BawxdiVt 7Yvzt3X0qqASst26PhNwRDVVHLX8LvCRdR/gfMtSZpQBlclKRxTv6/VIePpOQzlziyA9 ndkw== X-Gm-Message-State: AOAM530kRAE56sU21L19BnOGAWfuiNXAo63ec14xvRZagtosi3y2MMDE 232ixHErSS147kzUzRVqQIU= X-Google-Smtp-Source: ABdhPJxe9LbC26+AJyCNGBtz2icSYTQEEZCfJUVA34aP1fDmkyA/etTziXwMyaGkeRXRJd6m8OrZRg== X-Received: by 2002:a9d:6f07:0:b0:5b2:38e8:41f7 with SMTP id n7-20020a9d6f07000000b005b238e841f7mr2629574otq.308.1647568536330; Thu, 17 Mar 2022 18:55:36 -0700 (PDT) Received: from ubuntu-21.tx.rr.com (2603-8081-140c-1a00-257e-2cb6-0a79-8c62.res6.spectrum.com. 
[2603:8081:140c:1a00:257e:2cb6:a79:8c62]) by smtp.googlemail.com with ESMTPSA id a32-20020a056870a1a000b000d458b1469dsm3292878oaf.10.2022.03.17.18.55.35 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Thu, 17 Mar 2022 18:55:36 -0700 (PDT) From: Bob Pearson To: jgg@nvidia.com, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org Cc: Bob Pearson Subject: [PATCH for-next v12 10/11] RDMA/rxe: Convert read side locking to rcu Date: Thu, 17 Mar 2022 20:55:13 -0500 Message-Id: <20220318015514.231621-11-rpearsonhpe@gmail.com> X-Mailer: git-send-email 2.32.0 In-Reply-To: <20220318015514.231621-1-rpearsonhpe@gmail.com> References: <20220318015514.231621-1-rpearsonhpe@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-rdma@vger.kernel.org Use rcu_read_lock() for protecting read side operations in rxe_pool.c. Signed-off-by: Bob Pearson --- drivers/infiniband/sw/rxe/rxe_pool.c | 7 +++---- 1 file changed, 3 insertions(+), 4 deletions(-) diff --git a/drivers/infiniband/sw/rxe/rxe_pool.c b/drivers/infiniband/sw/rxe/rxe_pool.c index 6f39f74233d2..a2c74beceeae 100644 --- a/drivers/infiniband/sw/rxe/rxe_pool.c +++ b/drivers/infiniband/sw/rxe/rxe_pool.c @@ -190,16 +190,15 @@ void *rxe_pool_get_index(struct rxe_pool *pool, u32 index) { struct rxe_pool_elem *elem; struct xarray *xa = &pool->xa; - unsigned long flags; void *obj; - xa_lock_irqsave(xa, flags); + rcu_read_lock(); elem = xa_load(xa, index); if (elem && kref_get_unless_zero(&elem->ref_cnt)) obj = elem->obj; else obj = NULL; - xa_unlock_irqrestore(xa, flags); + rcu_read_unlock(); return obj; } @@ -242,7 +241,7 @@ int __rxe_cleanup(struct rxe_pool_elem *elem) pool->cleanup(elem); if (pool->flags & RXE_POOL_ALLOC) - kfree(elem->obj); + kfree_rcu(elem->obj); atomic_dec(&pool->num_elem); From patchwork Fri Mar 18 01:55:14 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Bob Pearson X-Patchwork-Id: 12784768 
X-Patchwork-Delegate: jgg@ziepe.ca
From: Bob Pearson
To: jgg@nvidia.com, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org
Cc: Bob Pearson
Subject: [PATCH for-next v12 11/11] RDMA/rxe: Cleanup rxe_pool.c
Date: Thu, 17 Mar 2022 20:55:14 -0500
Message-Id: <20220318015514.231621-12-rpearsonhpe@gmail.com>
In-Reply-To: <20220318015514.231621-1-rpearsonhpe@gmail.com>
References: <20220318015514.231621-1-rpearsonhpe@gmail.com>
X-Mailing-List: linux-rdma@vger.kernel.org

Minor cleanup of rxe_pool.c: add documentation comment headers for the
subroutines, increase the alignment of pool elements, and convert some
printk()s to WARN_ON()s.
Signed-off-by: Bob Pearson
---
 drivers/infiniband/sw/rxe/rxe_pool.c | 81 ++++++++++++++++++++++------
 1 file changed, 66 insertions(+), 15 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe_pool.c b/drivers/infiniband/sw/rxe/rxe_pool.c
index a2c74beceeae..268757a106ce 100644
--- a/drivers/infiniband/sw/rxe/rxe_pool.c
+++ b/drivers/infiniband/sw/rxe/rxe_pool.c
@@ -8,7 +8,7 @@
 
 #define RXE_POOL_TIMEOUT	(200)
 #define RXE_POOL_MAX_TIMEOUTS	(3)
-#define RXE_POOL_ALIGN		(16)
+#define RXE_POOL_ALIGN		(64)
 
 static const struct rxe_type_info {
 	const char *name;
@@ -120,24 +120,35 @@ void rxe_pool_cleanup(struct rxe_pool *pool)
 	WARN_ON(!xa_empty(&pool->xa));
 }
 
+/**
+ * rxe_alloc - allocate a new pool object
+ * @pool: object pool
+ *
+ * Context: in task.
+ * Returns: object on success else an ERR_PTR
+ */
 void *rxe_alloc(struct rxe_pool *pool)
 {
 	struct rxe_pool_elem *elem;
 	void *obj;
-	int err;
+	int err = -EINVAL;
 
 	if (WARN_ON(!(pool->flags & RXE_POOL_ALLOC)))
-		return NULL;
+		goto err_out;
+
+	if (WARN_ON(!in_task()))
+		goto err_out;
 
 	if (atomic_inc_return(&pool->num_elem) > pool->max_elem)
-		goto err_cnt;
+		goto err_dec;
 
 	obj = kzalloc(pool->elem_size, GFP_KERNEL);
-	if (!obj)
-		goto err_cnt;
+	if (!obj) {
+		err = -ENOMEM;
+		goto err_dec;
+	}
 
 	elem = (struct rxe_pool_elem *)((u8 *)obj + pool->elem_offset);
-
 	elem->pool = pool;
 	elem->obj = obj;
 	kref_init(&elem->ref_cnt);
@@ -154,20 +165,32 @@ void *rxe_alloc(struct rxe_pool *pool)
 
 err_free:
 	kfree(obj);
-err_cnt:
+err_dec:
 	atomic_dec(&pool->num_elem);
-	return NULL;
+err_out:
+	return ERR_PTR(err);
 }
 
+/**
+ * __rxe_add_to_pool - add rdma-core allocated object to rxe object pool
+ * @pool: object pool
+ * @elem: rxe_pool_elem embedded in object
+ *
+ * Context: in task.
+ * Returns: 0 on success else an error
+ */
 int __rxe_add_to_pool(struct rxe_pool *pool, struct rxe_pool_elem *elem)
 {
-	int err;
+	int err = -EINVAL;
 
 	if (WARN_ON(pool->flags & RXE_POOL_ALLOC))
-		return -EINVAL;
+		goto err_out;
+
+	if (WARN_ON(!in_task()))
+		goto err_out;
 
 	if (atomic_inc_return(&pool->num_elem) > pool->max_elem)
-		goto err_cnt;
+		goto err_dec;
 
 	elem->pool = pool;
 	elem->obj = (u8 *)elem - pool->elem_offset;
@@ -177,15 +200,23 @@ int __rxe_add_to_pool(struct rxe_pool *pool, struct rxe_pool_elem *elem)
 	err = xa_alloc_cyclic(&pool->xa, &elem->index, NULL, pool->limit,
 			      &pool->next, GFP_KERNEL);
 	if (err)
-		goto err_cnt;
+		goto err_dec;
 
 	return 0;
 
-err_cnt:
+err_dec:
 	atomic_dec(&pool->num_elem);
-	return -EINVAL;
+err_out:
+	return err;
 }
 
+/**
+ * rxe_pool_get_index - find object in pool with given index
+ * @pool: object pool
+ * @index: index
+ *
+ * Returns: object on success else NULL
+ */
 void *rxe_pool_get_index(struct rxe_pool *pool, u32 index)
 {
 	struct rxe_pool_elem *elem;
@@ -248,16 +279,32 @@ int __rxe_cleanup(struct rxe_pool_elem *elem)
 	return err;
 }
 
+/**
+ * __rxe_get - takes a ref on the object unless ref count is zero
+ * @elem: rxe_pool_elem embedded in object
+ *
+ * Returns: 1 if reference is added else 0
+ */
 int __rxe_get(struct rxe_pool_elem *elem)
 {
 	return kref_get_unless_zero(&elem->ref_cnt);
 }
 
+/**
+ * __rxe_put - puts a ref on the object
+ * @elem: rxe_pool_elem embedded in object
+ *
+ * Returns: 1 if ref count reaches zero and release called else 0
+ */
 int __rxe_put(struct rxe_pool_elem *elem)
 {
 	return kref_put(&elem->ref_cnt, rxe_elem_release);
 }
 
+/**
+ * __rxe_finalize - enable looking up object from index
+ * @elem: rxe_pool_elem embedded in object
+ */
 void __rxe_finalize(struct rxe_pool_elem *elem)
 {
 	struct xarray *xa = &elem->pool->xa;
@@ -270,6 +317,10 @@ void __rxe_finalize(struct rxe_pool_elem *elem)
 	WARN_ON(xa_err(ret));
 }
 
+/**
+ * __rxe_disable_lookup - disable looking up object from index
+ * @elem: rxe_pool_elem embedded in object
+ */
 void __rxe_disable_lookup(struct rxe_pool_elem *elem)
 {
 	struct xarray *xa = &elem->pool->xa;