From patchwork Mon Apr  5 05:24:03 2021
X-Patchwork-Submitter: Leon Romanovsky
X-Patchwork-Id: 12182673
X-Patchwork-Delegate: kuba@kernel.org
From: Leon Romanovsky
To: Doug Ledford, Jason Gunthorpe
Cc: Avihai Horon, Adit Ranadive, Anna Schumaker, Ariel Elior,
    Bart Van Assche, Bernard Metzler, Christoph Hellwig, Chuck Lever,
    "David S. Miller", Dennis Dalessandro, Devesh Sharma, Faisal Latif,
    Jack Wang, Jakub Kicinski, "J. Bruce Fields", Jens Axboe,
    Karsten Graul, Keith Busch, Lijun Ou, linux-cifs@vger.kernel.org,
    linux-kernel@vger.kernel.org, linux-nfs@vger.kernel.org,
    linux-nvme@lists.infradead.org, linux-rdma@vger.kernel.org,
    linux-s390@vger.kernel.org, Max Gurtovoy, "Md. Haris Iqbal",
    Michael Guralnik, Michal Kalderon, Mike Marciniszyn,
    Naresh Kumar PBS, netdev@vger.kernel.org, Potnuri Bharat Teja,
    rds-devel@oss.oracle.com, Sagi Grimberg,
    samba-technical@lists.samba.org, Santosh Shilimkar, Selvin Xavier,
    Shiraz Saleem, Somnath Kotur, Sriharsha Basavapatna, Steve French,
    Trond Myklebust, VMware PV-Drivers, Weihang Li, Yishai Hadas,
    Zhu Yanjun
Subject: [PATCH rdma-next 09/10] net/smc: Enable Relaxed Ordering
Date: Mon, 5 Apr 2021 08:24:03 +0300
Message-Id: <20210405052404.213889-10-leon@kernel.org>
In-Reply-To: <20210405052404.213889-1-leon@kernel.org>
References: <20210405052404.213889-1-leon@kernel.org>
X-Mailing-List: netdev@vger.kernel.org

From: Avihai Horon

Enable Relaxed Ordering for smc. Relaxed Ordering is an optional access
flag and as such, it is ignored by vendors that don't support it.

Signed-off-by: Avihai Horon
Reviewed-by: Michael Guralnik
Signed-off-by: Leon Romanovsky
---
 net/smc/smc_ib.c | 3 ++-
 net/smc/smc_wr.c | 3 ++-
 2 files changed, 4 insertions(+), 2 deletions(-)

diff --git a/net/smc/smc_ib.c b/net/smc/smc_ib.c
index 4e91ed3dc265..6b65c5d1f957 100644
--- a/net/smc/smc_ib.c
+++ b/net/smc/smc_ib.c
@@ -579,7 +579,8 @@ int smc_ib_get_memory_region(struct ib_pd *pd, int access_flags,
 		return 0; /* already done */
 
 	buf_slot->mr_rx[link_idx] =
-		ib_alloc_mr(pd, IB_MR_TYPE_MEM_REG, 1 << buf_slot->order, 0);
+		ib_alloc_mr(pd, IB_MR_TYPE_MEM_REG, 1 << buf_slot->order,
+			    IB_ACCESS_RELAXED_ORDERING);
 	if (IS_ERR(buf_slot->mr_rx[link_idx])) {
 		int rc;
 
diff --git a/net/smc/smc_wr.c b/net/smc/smc_wr.c
index cbc73a7e4d59..78e9650621f1 100644
--- a/net/smc/smc_wr.c
+++ b/net/smc/smc_wr.c
@@ -555,7 +555,8 @@ static void smc_wr_init_sge(struct smc_link *lnk)
 	lnk->wr_reg.wr.num_sge = 0;
 	lnk->wr_reg.wr.send_flags = IB_SEND_SIGNALED;
 	lnk->wr_reg.wr.opcode = IB_WR_REG_MR;
-	lnk->wr_reg.access = IB_ACCESS_LOCAL_WRITE | IB_ACCESS_REMOTE_WRITE;
+	lnk->wr_reg.access = IB_ACCESS_LOCAL_WRITE | IB_ACCESS_REMOTE_WRITE |
+			     IB_ACCESS_RELAXED_ORDERING;
 }
 
 void smc_wr_free_link(struct smc_link *lnk)