From patchwork Mon Apr 5 05:24:02 2021
X-Patchwork-Submitter: Leon Romanovsky
X-Patchwork-Id: 12182667
X-Patchwork-Delegate: kuba@kernel.org
From: Leon Romanovsky
To: Doug Ledford, Jason Gunthorpe
Cc: Avihai Horon, Adit Ranadive, Anna Schumaker, Ariel Elior,
    Bart Van Assche, Bernard Metzler, Christoph Hellwig, Chuck Lever,
    "David S. Miller", Dennis Dalessandro, Devesh Sharma, Faisal Latif,
    Jack Wang, Jakub Kicinski, "J. Bruce Fields", Jens Axboe,
    Karsten Graul, Keith Busch, Lijun Ou, linux-cifs@vger.kernel.org,
    linux-kernel@vger.kernel.org, linux-nfs@vger.kernel.org,
    linux-nvme@lists.infradead.org, linux-rdma@vger.kernel.org,
    linux-s390@vger.kernel.org, Max Gurtovoy, "Md. Haris Iqbal",
    Michael Guralnik, Michal Kalderon, Mike Marciniszyn,
    Naresh Kumar PBS, netdev@vger.kernel.org, Potnuri Bharat Teja,
    rds-devel@oss.oracle.com, Sagi Grimberg,
    samba-technical@lists.samba.org, Santosh Shilimkar, Selvin Xavier,
    Shiraz Saleem, Somnath Kotur, Sriharsha Basavapatna, Steve French,
    Trond Myklebust, VMware PV-Drivers, Weihang Li, Yishai Hadas,
    Zhu Yanjun
Subject: [PATCH rdma-next 08/10] net/rds: Enable Relaxed Ordering
Date: Mon, 5 Apr 2021 08:24:02 +0300
Message-Id: <20210405052404.213889-9-leon@kernel.org>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20210405052404.213889-1-leon@kernel.org>
References: <20210405052404.213889-1-leon@kernel.org>
X-Mailing-List: netdev@vger.kernel.org

From: Avihai Horon

Enable Relaxed Ordering for rds. Relaxed Ordering is an optional access
flag and as such, it is ignored by vendors that don't support it.
Signed-off-by: Avihai Horon
Reviewed-by: Michael Guralnik
Signed-off-by: Leon Romanovsky
---
 net/rds/ib_frmr.c | 7 +++----
 1 file changed, 3 insertions(+), 4 deletions(-)

diff --git a/net/rds/ib_frmr.c b/net/rds/ib_frmr.c
index 694eb916319e..1a60c2c90c78 100644
--- a/net/rds/ib_frmr.c
+++ b/net/rds/ib_frmr.c
@@ -76,7 +76,7 @@ static struct rds_ib_mr *rds_ib_alloc_frmr(struct rds_ib_device *rds_ibdev,
 
 	frmr = &ibmr->u.frmr;
 	frmr->mr = ib_alloc_mr(rds_ibdev->pd, IB_MR_TYPE_MEM_REG,
-			       pool->max_pages, 0);
+			       pool->max_pages, IB_ACCESS_RELAXED_ORDERING);
 	if (IS_ERR(frmr->mr)) {
 		pr_warn("RDS/IB: %s failed to allocate MR", __func__);
 		err = PTR_ERR(frmr->mr);
@@ -156,9 +156,8 @@ static int rds_ib_post_reg_frmr(struct rds_ib_mr *ibmr)
 	reg_wr.wr.num_sge = 0;
 	reg_wr.mr = frmr->mr;
 	reg_wr.key = frmr->mr->rkey;
-	reg_wr.access = IB_ACCESS_LOCAL_WRITE |
-			IB_ACCESS_REMOTE_READ |
-			IB_ACCESS_REMOTE_WRITE;
+	reg_wr.access = IB_ACCESS_LOCAL_WRITE | IB_ACCESS_REMOTE_READ |
+			IB_ACCESS_REMOTE_WRITE | IB_ACCESS_RELAXED_ORDERING;
 	reg_wr.wr.send_flags = IB_SEND_SIGNALED;
 
 	ret = ib_post_send(ibmr->ic->i_cm_id->qp, &reg_wr.wr, NULL);
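
For readers following along, a minimal sketch (not part of the patch) of the
idea above: a ULP requests Relaxed Ordering both when allocating the FRMR and
in the REG_MR work request access flags. It assumes the ib_alloc_mr() variant
used in this series, which takes an extra access-flags argument (the in-tree
ib_alloc_mr() does not); the helper names alloc_ro_frmr() and
prepare_ro_reg_wr() are illustrative only.

/* Sketch only: assumes this series' ib_alloc_mr(pd, type, max_sg, access). */
#include <rdma/ib_verbs.h>

static struct ib_mr *alloc_ro_frmr(struct ib_pd *pd, u32 max_pages)
{
	/* Request Relaxed Ordering at MR allocation time; HCAs that do
	 * not support the flag simply ignore it.
	 */
	return ib_alloc_mr(pd, IB_MR_TYPE_MEM_REG, max_pages,
			   IB_ACCESS_RELAXED_ORDERING);
}

static void prepare_ro_reg_wr(struct ib_reg_wr *reg_wr, struct ib_mr *mr)
{
	/* ...and request it again in the fast-registration work request. */
	reg_wr->wr.opcode = IB_WR_REG_MR;
	reg_wr->wr.send_flags = IB_SEND_SIGNALED;
	reg_wr->mr = mr;
	reg_wr->key = mr->rkey;
	reg_wr->access = IB_ACCESS_LOCAL_WRITE |
			 IB_ACCESS_REMOTE_READ |
			 IB_ACCESS_REMOTE_WRITE |
			 IB_ACCESS_RELAXED_ORDERING;
}

Because the flag is purely an optimization hint, the same code path works
unchanged on devices with and without Relaxed Ordering support.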