From patchwork Mon Apr 5 05:24:04 2021
X-Patchwork-Submitter: Leon Romanovsky
X-Patchwork-Id: 12182671
X-Patchwork-Delegate: kuba@kernel.org
From: Leon Romanovsky
To: Doug Ledford, Jason Gunthorpe
Cc: Avihai Horon, Adit Ranadive, Anna Schumaker, Ariel Elior,
    Bart Van Assche, Bernard Metzler, Christoph Hellwig, Chuck Lever,
    "David S. Miller", Dennis Dalessandro, Devesh Sharma, Faisal Latif,
    Jack Wang, Jakub Kicinski, "J. Bruce Fields", Jens Axboe,
    Karsten Graul, Keith Busch, Lijun Ou, linux-cifs@vger.kernel.org,
    linux-kernel@vger.kernel.org, linux-nfs@vger.kernel.org,
    linux-nvme@lists.infradead.org, linux-rdma@vger.kernel.org,
    linux-s390@vger.kernel.org, Max Gurtovoy, "Md. Haris Iqbal",
    Michael Guralnik, Michal Kalderon, Mike Marciniszyn,
    Naresh Kumar PBS, netdev@vger.kernel.org, Potnuri Bharat Teja,
    rds-devel@oss.oracle.com, Sagi Grimberg,
    samba-technical@lists.samba.org, Santosh Shilimkar, Selvin Xavier,
    Shiraz Saleem, Somnath Kotur, Sriharsha Basavapatna, Steve French,
    Trond Myklebust, VMware PV-Drivers, Weihang Li, Yishai Hadas,
    Zhu Yanjun
Subject: [PATCH rdma-next 10/10] xprtrdma: Enable Relaxed Ordering
Date: Mon, 5 Apr 2021 08:24:04 +0300
Message-Id: <20210405052404.213889-11-leon@kernel.org>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20210405052404.213889-1-leon@kernel.org>
References: <20210405052404.213889-1-leon@kernel.org>
X-Mailing-List: netdev@vger.kernel.org

From: Avihai Horon

Enable Relaxed Ordering for xprtrdma.

Relaxed Ordering is an optional access flag and, as such, it is ignored
by vendors that don't support it.
Signed-off-by: Avihai Horon
Reviewed-by: Michael Guralnik
Signed-off-by: Leon Romanovsky
---
 net/sunrpc/xprtrdma/frwr_ops.c | 10 ++++++----
 1 file changed, 6 insertions(+), 4 deletions(-)

diff --git a/net/sunrpc/xprtrdma/frwr_ops.c b/net/sunrpc/xprtrdma/frwr_ops.c
index cfbdd197cdfe..f9334c0a1a13 100644
--- a/net/sunrpc/xprtrdma/frwr_ops.c
+++ b/net/sunrpc/xprtrdma/frwr_ops.c
@@ -135,7 +135,8 @@ int frwr_mr_init(struct rpcrdma_xprt *r_xprt, struct rpcrdma_mr *mr)
 	struct ib_mr *frmr;
 	int rc;
 
-	frmr = ib_alloc_mr(ep->re_pd, ep->re_mrtype, depth, 0);
+	frmr = ib_alloc_mr(ep->re_pd, ep->re_mrtype, depth,
+			   IB_ACCESS_RELAXED_ORDERING);
 	if (IS_ERR(frmr))
 		goto out_mr_err;
 
@@ -339,9 +340,10 @@ struct rpcrdma_mr_seg *frwr_map(struct rpcrdma_xprt *r_xprt,
 	reg_wr = &mr->frwr.fr_regwr;
 	reg_wr->mr = ibmr;
 	reg_wr->key = ibmr->rkey;
-	reg_wr->access = writing ?
-			 IB_ACCESS_REMOTE_WRITE | IB_ACCESS_LOCAL_WRITE :
-			 IB_ACCESS_REMOTE_READ;
+	reg_wr->access =
+		(writing ? IB_ACCESS_REMOTE_WRITE | IB_ACCESS_LOCAL_WRITE :
+			   IB_ACCESS_REMOTE_READ) |
+		IB_ACCESS_RELAXED_ORDERING;
 	mr->mr_handle = ibmr->rkey;
 	mr->mr_length = ibmr->length;
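
[Not part of the patch: a minimal user-space sketch of how the access mask in
frwr_map() is composed after this change. The flag values below are
illustrative placeholders, not the real bit definitions from
include/rdma/ib_verbs.h; only the way the bits are combined mirrors the diff
above.]

#include <stdio.h>

#define IB_ACCESS_LOCAL_WRITE      (1U << 0)  /* placeholder value */
#define IB_ACCESS_REMOTE_WRITE     (1U << 1)  /* placeholder value */
#define IB_ACCESS_REMOTE_READ      (1U << 2)  /* placeholder value */
#define IB_ACCESS_RELAXED_ORDERING (1U << 20) /* placeholder value */

static unsigned int frwr_access_flags(int writing)
{
	/*
	 * Mirrors the expression in frwr_map() after this patch: the
	 * read/write bits depend on the I/O direction, and
	 * IB_ACCESS_RELAXED_ORDERING is ORed in unconditionally.
	 */
	return (writing ? IB_ACCESS_REMOTE_WRITE | IB_ACCESS_LOCAL_WRITE :
			  IB_ACCESS_REMOTE_READ) |
	       IB_ACCESS_RELAXED_ORDERING;
}

int main(void)
{
	printf("write path mask: 0x%x\n", frwr_access_flags(1));
	printf("read path mask:  0x%x\n", frwr_access_flags(0));
	return 0;
}

[Because Relaxed Ordering is advisory, devices that do not support it simply
ignore the bit, which is why it can be set unconditionally on both paths.]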