From patchwork Thu Dec 8 21:05:43 2022
X-Patchwork-Submitter: Bob Pearson
X-Patchwork-Id: 13068903
X-Patchwork-Delegate: jgg@ziepe.ca
From: Bob Pearson
To: jgg@nvidia.com, leonro@nvidia.com, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org
Cc: Bob Pearson
Subject: [PATCH for-next 1/6] RDMA/rxe: Cleanup mr_check_range
Date: Thu, 8 Dec 2022 15:05:43 -0600
Message-Id: <20221208210547.28562-2-rpearsonhpe@gmail.com>
In-Reply-To: <20221208210547.28562-1-rpearsonhpe@gmail.com>
References: <20221208210547.28562-1-rpearsonhpe@gmail.com>

Remove blank lines and replace EFAULT by EINVAL when an invalid
mr type is used.

Signed-off-by: Bob Pearson
---
 drivers/infiniband/sw/rxe/rxe_mr.c | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe_mr.c b/drivers/infiniband/sw/rxe/rxe_mr.c
index b7c9ff1ddf0e..b007ff05baaf 100644
--- a/drivers/infiniband/sw/rxe/rxe_mr.c
+++ b/drivers/infiniband/sw/rxe/rxe_mr.c
@@ -24,8 +24,6 @@ u8 rxe_get_next_key(u32 last_key)
 
 int mr_check_range(struct rxe_mr *mr, u64 iova, size_t length)
 {
-
-
 	switch (mr->ibmr.type) {
 	case IB_MR_TYPE_DMA:
 		return 0;
@@ -39,7 +37,7 @@ int mr_check_range(struct rxe_mr *mr, u64 iova, size_t length)
 	default:
 		rxe_dbg_mr(mr, "type (%d) not supported\n", mr->ibmr.type);
-		return -EFAULT;
+		return -EINVAL;
 	}
 }

From patchwork Thu Dec 8 21:05:44 2022
X-Patchwork-Submitter: Bob Pearson
X-Patchwork-Id: 13068904
X-Patchwork-Delegate: jgg@ziepe.ca
From: Bob Pearson
To: jgg@nvidia.com, leonro@nvidia.com, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org
Cc: Bob Pearson
Subject: [PATCH for-next 2/6] RDMA/rxe: Move rxe_map_mr_sg to rxe_mr.c
Date: Thu, 8 Dec 2022 15:05:44 -0600
Message-Id: <20221208210547.28562-3-rpearsonhpe@gmail.com>
In-Reply-To: <20221208210547.28562-1-rpearsonhpe@gmail.com>
References: <20221208210547.28562-1-rpearsonhpe@gmail.com>

Move rxe_map_mr_sg() to rxe_mr.c where it makes a little more sense.
Signed-off-by: Bob Pearson
---
 drivers/infiniband/sw/rxe/rxe_loc.h   |  2 ++
 drivers/infiniband/sw/rxe/rxe_mr.c    | 36 +++++++++++++++++++++++++++
 drivers/infiniband/sw/rxe/rxe_verbs.c | 36 ---------------------------
 3 files changed, 38 insertions(+), 36 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe_loc.h b/drivers/infiniband/sw/rxe/rxe_loc.h
index a22476d27b38..11106c181e74 100644
--- a/drivers/infiniband/sw/rxe/rxe_loc.h
+++ b/drivers/infiniband/sw/rxe/rxe_loc.h
@@ -68,6 +68,8 @@ int rxe_mr_copy(struct rxe_mr *mr, u64 iova, void *addr, int length,
 		enum rxe_mr_copy_dir dir);
 int copy_data(struct rxe_pd *pd, int access, struct rxe_dma_info *dma,
 	      void *addr, int length, enum rxe_mr_copy_dir dir);
+int rxe_map_mr_sg(struct ib_mr *ibmr, struct scatterlist *sg,
+		  int sg_nents, unsigned int *sg_offset);
 void *iova_to_vaddr(struct rxe_mr *mr, u64 iova, int length);
 struct rxe_mr *lookup_mr(struct rxe_pd *pd, int access, u32 key,
 			 enum rxe_mr_lookup_type type);
diff --git a/drivers/infiniband/sw/rxe/rxe_mr.c b/drivers/infiniband/sw/rxe/rxe_mr.c
index b007ff05baaf..0dabd3897028 100644
--- a/drivers/infiniband/sw/rxe/rxe_mr.c
+++ b/drivers/infiniband/sw/rxe/rxe_mr.c
@@ -202,6 +202,42 @@ int rxe_mr_init_fast(int max_pages, struct rxe_mr *mr)
 	return err;
 }
 
+static int rxe_set_page(struct ib_mr *ibmr, u64 addr)
+{
+	struct rxe_mr *mr = to_rmr(ibmr);
+	struct rxe_map *map;
+	struct rxe_phys_buf *buf;
+
+	if (unlikely(mr->nbuf == mr->num_buf))
+		return -ENOMEM;
+
+	map = mr->map[mr->nbuf / RXE_BUF_PER_MAP];
+	buf = &map->buf[mr->nbuf % RXE_BUF_PER_MAP];
+
+	buf->addr = addr;
+	buf->size = ibmr->page_size;
+	mr->nbuf++;
+
+	return 0;
+}
+
+int rxe_map_mr_sg(struct ib_mr *ibmr, struct scatterlist *sg,
+		  int sg_nents, unsigned int *sg_offset)
+{
+	struct rxe_mr *mr = to_rmr(ibmr);
+	int n;
+
+	mr->nbuf = 0;
+
+	n = ib_sg_to_pages(ibmr, sg, sg_nents, sg_offset, rxe_set_page);
+
+	mr->page_shift = ilog2(ibmr->page_size);
+	mr->page_mask = ibmr->page_size - 1;
+	mr->offset = ibmr->iova & mr->page_mask;
+
+	return n;
+}
+
 static void lookup_iova(struct rxe_mr *mr, u64 iova, int *m_out, int *n_out,
 			size_t *offset_out)
 {
diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.c b/drivers/infiniband/sw/rxe/rxe_verbs.c
index 025b35bf014e..7a902e0a0607 100644
--- a/drivers/infiniband/sw/rxe/rxe_verbs.c
+++ b/drivers/infiniband/sw/rxe/rxe_verbs.c
@@ -948,42 +948,6 @@ static struct ib_mr *rxe_alloc_mr(struct ib_pd *ibpd, enum ib_mr_type mr_type,
 	return ERR_PTR(err);
 }
 
-static int rxe_set_page(struct ib_mr *ibmr, u64 addr)
-{
-	struct rxe_mr *mr = to_rmr(ibmr);
-	struct rxe_map *map;
-	struct rxe_phys_buf *buf;
-
-	if (unlikely(mr->nbuf == mr->num_buf))
-		return -ENOMEM;
-
-	map = mr->map[mr->nbuf / RXE_BUF_PER_MAP];
-	buf = &map->buf[mr->nbuf % RXE_BUF_PER_MAP];
-
-	buf->addr = addr;
-	buf->size = ibmr->page_size;
-	mr->nbuf++;
-
-	return 0;
-}
-
-static int rxe_map_mr_sg(struct ib_mr *ibmr, struct scatterlist *sg,
-			 int sg_nents, unsigned int *sg_offset)
-{
-	struct rxe_mr *mr = to_rmr(ibmr);
-	int n;
-
-	mr->nbuf = 0;
-
-	n = ib_sg_to_pages(ibmr, sg, sg_nents, sg_offset, rxe_set_page);
-
-	mr->page_shift = ilog2(ibmr->page_size);
-	mr->page_mask = ibmr->page_size - 1;
-	mr->offset = ibmr->iova & mr->page_mask;
-
-	return n;
-}
-
 static ssize_t parent_show(struct device *device,
 			   struct device_attribute *attr, char *buf)
 {
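For illustration, a minimal sketch of the ib_sg_to_pages() callback
contract that rxe_set_page() implements: the core walks the scatterlist
and invokes the callback once per page_size chunk, and a nonzero return
stops the walk. demo_set_page() and demo_map_mr_sg() are hypothetical
names, not driver code.

#include <rdma/ib_verbs.h>

static int demo_set_page(struct ib_mr *ibmr, u64 addr)
{
	/* a driver records addr (one ibmr->page_size chunk) in its
	 * page map here; returning nonzero (e.g. -ENOMEM when the
	 * map is full) stops the walk
	 */
	return 0;
}

static int demo_map_mr_sg(struct ib_mr *ibmr, struct scatterlist *sgl,
			  int sg_nents, unsigned int *sg_offset)
{
	/* returns the number of chunks successfully mapped */
	return ib_sg_to_pages(ibmr, sgl, sg_nents, sg_offset,
			      demo_set_page);
}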
From patchwork Thu Dec 8 21:05:45 2022
X-Patchwork-Submitter: Bob Pearson
X-Patchwork-Id: 13068905
X-Patchwork-Delegate: jgg@ziepe.ca
From: Bob Pearson
To: jgg@nvidia.com, leonro@nvidia.com, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org
Cc: Bob Pearson
Subject: [PATCH for-next 3/6] RDMA/rxe: Isolate mr code from atomic_reply()
Date: Thu, 8 Dec 2022 15:05:45 -0600
Message-Id: <20221208210547.28562-4-rpearsonhpe@gmail.com>
In-Reply-To: <20221208210547.28562-1-rpearsonhpe@gmail.com>
References: <20221208210547.28562-1-rpearsonhpe@gmail.com>

Isolate the mr-specific code from atomic_reply() in rxe_resp.c into
a new subroutine rxe_mr_do_atomic_op() in rxe_mr.c.

Signed-off-by: Bob Pearson
---
 drivers/infiniband/sw/rxe/rxe_loc.h  |  2 ++
 drivers/infiniband/sw/rxe/rxe_mr.c   | 30 ++++++++++++++++++
 drivers/infiniband/sw/rxe/rxe_resp.c | 47 +++++++---------------------
 3 files changed, 44 insertions(+), 35 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe_loc.h b/drivers/infiniband/sw/rxe/rxe_loc.h
index 11106c181e74..b14607bb54b1 100644
--- a/drivers/infiniband/sw/rxe/rxe_loc.h
+++ b/drivers/infiniband/sw/rxe/rxe_loc.h
@@ -71,6 +71,8 @@ int copy_data(struct rxe_pd *pd, int access, struct rxe_dma_info *dma,
 int rxe_map_mr_sg(struct ib_mr *ibmr, struct scatterlist *sg,
 		  int sg_nents, unsigned int *sg_offset);
 void *iova_to_vaddr(struct rxe_mr *mr, u64 iova, int length);
+int rxe_mr_do_atomic_op(struct rxe_mr *mr, u64 iova, int opcode,
+			u64 compare, u64 swap_add, u64 *orig_val);
 struct rxe_mr *lookup_mr(struct rxe_pd *pd, int access, u32 key,
 			 enum rxe_mr_lookup_type type);
 int mr_check_range(struct rxe_mr *mr, u64 iova, size_t length);
diff --git a/drivers/infiniband/sw/rxe/rxe_mr.c b/drivers/infiniband/sw/rxe/rxe_mr.c
index 0dabd3897028..ec3c8e6e8318 100644
--- a/drivers/infiniband/sw/rxe/rxe_mr.c
+++ b/drivers/infiniband/sw/rxe/rxe_mr.c
@@ -484,6 +484,36 @@ int copy_data(
 	return err;
 }
 
+static DEFINE_SPINLOCK(atomic_ops_lock);
+
+int rxe_mr_do_atomic_op(struct rxe_mr *mr, u64 iova, int opcode,
+			u64 compare, u64 swap_add, u64 *orig_val)
+{
+	u64 *vaddr = iova_to_vaddr(mr, iova, sizeof(u64));
+	u64 value;
+
+	/* error returns need to match the handling in rxe_resp.c */
+	if (mr->state != RXE_MR_STATE_VALID || !vaddr)
+		return -EFAULT;
+	if ((uintptr_t)vaddr & 7)
+		return -EINVAL;
+
+	spin_lock_bh(&atomic_ops_lock);
+	value = *orig_val = *vaddr;
+
+	if (opcode == IB_OPCODE_RC_COMPARE_SWAP) {
+		if (value == compare)
+			value = swap_add;
+	} else {
+		value += swap_add;
+	}
+
+	*vaddr = value;
+	spin_unlock_bh(&atomic_ops_lock);
+
+	return 0;
+}
+
 int advance_dma_data(struct rxe_dma_info *dma, unsigned int length)
 {
 	struct rxe_sge *sge = &dma->sge[dma->cur_sge];
diff --git a/drivers/infiniband/sw/rxe/rxe_resp.c b/drivers/infiniband/sw/rxe/rxe_resp.c
index 6ac544477f3f..17192e768a2d 100644
--- a/drivers/infiniband/sw/rxe/rxe_resp.c
+++ b/drivers/infiniband/sw/rxe/rxe_resp.c
@@ -616,17 +616,12 @@ static struct resp_res *rxe_prepare_res(struct rxe_qp *qp,
 	return res;
 }
 
-/* Guarantee atomicity of atomic operations at the machine level.
- */
-static DEFINE_SPINLOCK(atomic_ops_lock);
-
 static enum resp_states atomic_reply(struct rxe_qp *qp,
-					 struct rxe_pkt_info *pkt)
+				     struct rxe_pkt_info *pkt)
 {
-	u64 *vaddr;
-	enum resp_states ret;
 	struct rxe_mr *mr = qp->resp.mr;
 	struct resp_res *res = qp->resp.res;
-	u64 value;
+	int err;
 
 	if (!res) {
 		res = rxe_prepare_res(qp, pkt, RXE_ATOMIC_MASK);
@@ -634,32 +629,16 @@ static enum resp_states atomic_reply(struct rxe_qp *qp,
 	}
 
 	if (!res->replay) {
-		if (mr->state != RXE_MR_STATE_VALID) {
-			ret = RESPST_ERR_RKEY_VIOLATION;
-			goto out;
-		}
-
-		vaddr = iova_to_vaddr(mr, qp->resp.va + qp->resp.offset,
-				      sizeof(u64));
-
-		/* check vaddr is 8 bytes aligned. */
-		if (!vaddr || (uintptr_t)vaddr & 7) {
-			ret = RESPST_ERR_MISALIGNED_ATOMIC;
-			goto out;
-		}
+		u64 iova = qp->resp.va + qp->resp.offset;
 
-		spin_lock_bh(&atomic_ops_lock);
-		res->atomic.orig_val = value = *vaddr;
-
-		if (pkt->opcode == IB_OPCODE_RC_COMPARE_SWAP) {
-			if (value == atmeth_comp(pkt))
-				value = atmeth_swap_add(pkt);
-		} else {
-			value += atmeth_swap_add(pkt);
-		}
-
-		*vaddr = value;
-		spin_unlock_bh(&atomic_ops_lock);
+		err = rxe_mr_do_atomic_op(mr, iova, pkt->opcode,
+					  atmeth_comp(pkt),
+					  atmeth_swap_add(pkt),
+					  &res->atomic.orig_val);
+		if (err == -EINVAL)
+			return RESPST_ERR_MISALIGNED_ATOMIC;
+		else if (err)
+			return RESPST_ERR_RKEY_VIOLATION;
 
 		qp->resp.msn++;
 
@@ -671,9 +650,7 @@ static enum resp_states atomic_reply(struct rxe_qp *qp,
 		qp->resp.status = IB_WC_SUCCESS;
 	}
 
-	ret = RESPST_ACKNOWLEDGE;
-out:
-	return ret;
+	return RESPST_ACKNOWLEDGE;
 }
 
 static enum resp_states atomic_write_reply(struct rxe_qp *qp,
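For illustration, a minimal userspace sketch (not driver code;
do_atomic_op() is a hypothetical name) of the two semantics that
rxe_mr_do_atomic_op() serializes under atomic_ops_lock: compare &
swap and fetch & add, each returning the original value to the
requester.

#include <stdint.h>
#include <stdio.h>

static uint64_t do_atomic_op(uint64_t *vaddr, int is_cmp_swp,
			     uint64_t compare, uint64_t swap_add)
{
	uint64_t orig = *vaddr;	/* reported back to the requester */

	if (is_cmp_swp) {
		if (orig == compare)
			*vaddr = swap_add;	/* compare & swap */
	} else {
		*vaddr = orig + swap_add;	/* fetch & add */
	}
	return orig;
}

int main(void)
{
	uint64_t x = 5;

	printf("fetch&add: orig=%llu new=%llu\n",
	       (unsigned long long)do_atomic_op(&x, 0, 0, 3),
	       (unsigned long long)x);	/* orig=5 new=8 */
	printf("cmp&swap:  orig=%llu new=%llu\n",
	       (unsigned long long)do_atomic_op(&x, 1, 8, 42),
	       (unsigned long long)x);	/* orig=8 new=42 */
	return 0;
}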
From patchwork Thu Dec 8 21:05:46 2022
X-Patchwork-Submitter: Bob Pearson
X-Patchwork-Id: 13068906
X-Patchwork-Delegate: jgg@ziepe.ca
From: Bob Pearson
To: jgg@nvidia.com, leonro@nvidia.com, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org
Cc: Bob Pearson
Subject: [PATCH for-next 4/6] RDMA/rxe: Isolate mr code from atomic_write_reply()
Date: Thu, 8 Dec 2022 15:05:46 -0600
Message-Id: <20221208210547.28562-5-rpearsonhpe@gmail.com>
In-Reply-To: <20221208210547.28562-1-rpearsonhpe@gmail.com>
References: <20221208210547.28562-1-rpearsonhpe@gmail.com>

Isolate the mr-specific code from atomic_write_reply() in rxe_resp.c
into a new subroutine rxe_mr_do_atomic_write() in rxe_mr.c and check
the length for the atomic write operation.
Signed-off-by: Bob Pearson
---
 drivers/infiniband/sw/rxe/rxe_loc.h  |  1 +
 drivers/infiniband/sw/rxe/rxe_mr.c   | 39 ++++++++++++++++++++++++++++
 drivers/infiniband/sw/rxe/rxe_resp.c | 25 ++++++-----------
 3 files changed, 49 insertions(+), 16 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe_loc.h b/drivers/infiniband/sw/rxe/rxe_loc.h
index b14607bb54b1..e1bb977cdbc0 100644
--- a/drivers/infiniband/sw/rxe/rxe_loc.h
+++ b/drivers/infiniband/sw/rxe/rxe_loc.h
@@ -73,6 +73,7 @@ int rxe_map_mr_sg(struct ib_mr *ibmr, struct scatterlist *sg,
 void *iova_to_vaddr(struct rxe_mr *mr, u64 iova, int length);
 int rxe_mr_do_atomic_op(struct rxe_mr *mr, u64 iova, int opcode,
 			u64 compare, u64 swap_add, u64 *orig_val);
+int rxe_mr_do_atomic_write(struct rxe_mr *mr, u64 iova, void *addr);
 struct rxe_mr *lookup_mr(struct rxe_pd *pd, int access, u32 key,
 			 enum rxe_mr_lookup_type type);
 int mr_check_range(struct rxe_mr *mr, u64 iova, size_t length);
diff --git a/drivers/infiniband/sw/rxe/rxe_mr.c b/drivers/infiniband/sw/rxe/rxe_mr.c
index ec3c8e6e8318..c6aa86d28b89 100644
--- a/drivers/infiniband/sw/rxe/rxe_mr.c
+++ b/drivers/infiniband/sw/rxe/rxe_mr.c
@@ -514,6 +514,45 @@ int rxe_mr_do_atomic_op(struct rxe_mr *mr, u64 iova, int opcode,
 	return 0;
 }
 
+/*
+ * Returns:
+ *	 0 on success
+ *	-1 for misaligned address
+ *	-2 for access errors
+ */
+int rxe_mr_do_atomic_write(struct rxe_mr *mr, u64 iova, void *addr)
+{
+	u64 *vaddr;
+	u64 value;
+	unsigned int length = 8;
+
+	/* See IBA oA19-28 */
+	if (unlikely(mr->state != RXE_MR_STATE_VALID)) {
+		rxe_dbg_mr(mr, "mr not valid");
+		return -2;
+	}
+
+	vaddr = iova_to_vaddr(mr, iova, length);
+	if (unlikely(!vaddr)) {
+		rxe_dbg_mr(mr, "iova out of range");
+		return -2;
+	}
+
+	/* See IBA A19.4.2 */
+	if (unlikely((uintptr_t)vaddr & 0x7 || iova & 0x7)) {
+		rxe_dbg_mr(mr, "misaligned address");
+		return -1;
+	}
+
+	/* the payload of an atomic write request is always 8 bytes */
+	memcpy(&value, addr, length);
+
+	/* Do atomic write after all prior operations have completed */
+	smp_store_release(vaddr, value);
+
+	return 0;
+}
+
 int advance_dma_data(struct rxe_dma_info *dma, unsigned int length)
 {
 	struct rxe_sge *sge = &dma->sge[dma->cur_sge];
diff --git a/drivers/infiniband/sw/rxe/rxe_resp.c b/drivers/infiniband/sw/rxe/rxe_resp.c
index 17192e768a2d..abe9cfd935c2 100644
--- a/drivers/infiniband/sw/rxe/rxe_resp.c
+++ b/drivers/infiniband/sw/rxe/rxe_resp.c
@@ -654,12 +654,12 @@ static enum resp_states atomic_reply(struct rxe_qp *qp,
 }
 
 static enum resp_states atomic_write_reply(struct rxe_qp *qp,
-					 struct rxe_pkt_info *pkt)
+					   struct rxe_pkt_info *pkt)
 {
-	u64 src, *dst;
 	struct resp_res *res = qp->resp.res;
 	struct rxe_mr *mr = qp->resp.mr;
-	int payload = payload_size(pkt);
+	void *addr = payload_addr(pkt);
+	int err;
 
 	if (!res) {
 		res = rxe_prepare_res(qp, pkt, RXE_ATOMIC_WRITE_MASK);
@@ -668,22 +668,15 @@ static enum resp_states atomic_write_reply(struct rxe_qp *qp,
 
 	if (!res->replay) {
 #ifdef CONFIG_64BIT
-		if (mr->state != RXE_MR_STATE_VALID)
-			return RESPST_ERR_RKEY_VIOLATION;
-
-		memcpy(&src, payload_addr(pkt), payload);
+		u64 iova = qp->resp.va + qp->resp.offset;
 
-		dst = iova_to_vaddr(mr, qp->resp.va + qp->resp.offset, payload);
-		/* check vaddr is 8 bytes aligned. */
-		if (!dst || (uintptr_t)dst & 7)
+		err = rxe_mr_do_atomic_write(mr, iova, addr);
+		if (err == -1)
 			return RESPST_ERR_MISALIGNED_ATOMIC;
+		else if (err)
+			return RESPST_ERR_RKEY_VIOLATION;
 
-		/* Do atomic write after all prior operations have completed */
-		smp_store_release(dst, src);
-
-		/* decrease resp.resid to zero */
-		qp->resp.resid -= sizeof(payload);
-
+		qp->resp.resid -= 8;
 		qp->resp.msn++;
 
 		/* next expected psn, read handles this separately */
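For readers unfamiliar with smp_store_release(), the C11 equivalent of
the 8-byte store the new helper performs is sketched below (a userspace
sketch, not driver code; atomic_write64() is a hypothetical name). On
64-bit targets an aligned 8-byte store is performed as a single unit,
and release ordering publishes all earlier writes first.

#include <stdatomic.h>
#include <stdint.h>
#include <stdio.h>

/* mirrors smp_store_release(vaddr, value) in rxe_mr_do_atomic_write();
 * the destination must be 8-byte aligned so the store cannot tear
 */
static void atomic_write64(_Atomic uint64_t *dst, uint64_t val)
{
	atomic_store_explicit(dst, val, memory_order_release);
}

int main(void)
{
	_Atomic uint64_t slot = 0;

	atomic_write64(&slot, 0x1122334455667788ull);
	printf("%llx\n", (unsigned long long)atomic_load(&slot));
	return 0;
}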
From patchwork Thu Dec 8 21:05:47 2022
X-Patchwork-Submitter: Bob Pearson
X-Patchwork-Id: 13068907
X-Patchwork-Delegate: jgg@ziepe.ca
From: Bob Pearson
To: jgg@nvidia.com, leonro@nvidia.com, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org
Cc: Bob Pearson
Subject: [PATCH for-next 5/6] RDMA/rxe: Cleanup page variables in rxe_mr.c
Date: Thu, 8 Dec 2022 15:05:47 -0600
Message-Id: <20221208210547.28562-6-rpearsonhpe@gmail.com>
In-Reply-To: <20221208210547.28562-1-rpearsonhpe@gmail.com>
References: <20221208210547.28562-1-rpearsonhpe@gmail.com>

Clean up the usage of mr->page_shift and mr->page_mask and introduce
an extractor for mr->ibmr.page_size. The normal convention in the
kernel is for page_mask to mask out the offset within the page rather
than the page number; the rxe driver had that reversed, which was
confusing. Implicitly each mr can have its own page_size, which was
not uniformly supported.

Signed-off-by: Bob Pearson
---
 drivers/infiniband/sw/rxe/rxe_mr.c    | 28 +++++++++++++--------------
 drivers/infiniband/sw/rxe/rxe_verbs.h | 11 ++++++++---
 2 files changed, 21 insertions(+), 18 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe_mr.c b/drivers/infiniband/sw/rxe/rxe_mr.c
index c6aa86d28b89..2b600557fbfa 100644
--- a/drivers/infiniband/sw/rxe/rxe_mr.c
+++ b/drivers/infiniband/sw/rxe/rxe_mr.c
@@ -58,6 +58,9 @@ static void rxe_mr_init(int access, struct rxe_mr *mr)
 	mr->lkey = mr->ibmr.lkey = lkey;
 	mr->rkey = mr->ibmr.rkey = rkey;
 
+	mr->ibmr.page_size = PAGE_SIZE;
+	mr->page_mask = PAGE_MASK;
+	mr->page_shift = PAGE_SHIFT;
 	mr->state = RXE_MR_STATE_INVALID;
 }
 
@@ -138,9 +141,6 @@ int rxe_mr_init_user(struct rxe_dev *rxe, u64 start, u64 length, u64 iova,
 		goto err_release_umem;
 	}
 
-	mr->page_shift = PAGE_SHIFT;
-	mr->page_mask = PAGE_SIZE - 1;
-
 	num_buf = 0;
 	map = mr->map;
 	if (length > 0) {
@@ -160,7 +160,7 @@ int rxe_mr_init_user(struct rxe_dev *rxe, u64 start, u64 length, u64 iova,
 				goto err_release_umem;
 			}
 			buf->addr = (uintptr_t)vaddr;
-			buf->size = PAGE_SIZE;
+			buf->size = mr_page_size(mr);
 			num_buf++;
 			buf++;
 
@@ -169,7 +169,7 @@ int rxe_mr_init_user(struct rxe_dev *rxe, u64 start, u64 length, u64 iova,
 
 	mr->umem = umem;
 	mr->access = access;
-	mr->offset = ib_umem_offset(umem);
+	mr->page_offset = ib_umem_offset(umem);
 	mr->state = RXE_MR_STATE_VALID;
 	mr->ibmr.type = IB_MR_TYPE_USER;
 
@@ -225,29 +225,27 @@ int rxe_map_mr_sg(struct ib_mr *ibmr, struct scatterlist *sg,
 		  int sg_nents, unsigned int *sg_offset)
 {
 	struct rxe_mr *mr = to_rmr(ibmr);
-	int n;
+	unsigned int page_size = mr_page_size(mr);
 
-	mr->nbuf = 0;
-
-	n = ib_sg_to_pages(ibmr, sg, sg_nents, sg_offset, rxe_set_page);
+	mr->page_shift = ilog2(page_size);
+	mr->page_mask = ~((u64)page_size - 1);
+	mr->page_offset = ibmr->iova & (page_size - 1);
 
-	mr->page_shift = ilog2(ibmr->page_size);
-	mr->page_mask = ibmr->page_size - 1;
-	mr->offset = ibmr->iova & mr->page_mask;
+	mr->nbuf = 0;
 
-	return n;
+	return ib_sg_to_pages(ibmr, sg, sg_nents, sg_offset, rxe_set_page);
 }
 
 static void lookup_iova(struct rxe_mr *mr, u64 iova, int *m_out, int *n_out,
 			size_t *offset_out)
 {
-	size_t offset = iova - mr->ibmr.iova + mr->offset;
+	size_t offset = iova - mr->ibmr.iova + mr->page_offset;
 	int map_index;
 	int buf_index;
 	u64 length;
 
 	if (likely(mr->page_shift)) {
-		*offset_out = offset & mr->page_mask;
+		*offset_out = offset & (mr_page_size(mr) - 1);
 		offset >>= mr->page_shift;
 		*n_out = offset & mr->map_mask;
 		*m_out = offset >> mr->map_shift;
diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.h b/drivers/infiniband/sw/rxe/rxe_verbs.h
index 22a299b0a9f0..ddaa4de5e1c7 100644
--- a/drivers/infiniband/sw/rxe/rxe_verbs.h
+++ b/drivers/infiniband/sw/rxe/rxe_verbs.h
@@ -304,11 +304,11 @@ struct rxe_mr {
 	u32 lkey;
 	u32 rkey;
 	enum rxe_mr_state state;
-	u32 offset;
 	int access;
 
-	int page_shift;
-	int page_mask;
+	unsigned int page_offset;
+	unsigned int page_shift;
+	u64 page_mask;
 
 	int map_shift;
 	int map_mask;
@@ -323,6 +323,11 @@ struct rxe_mr {
 	struct rxe_map **map;
 };
 
+static inline unsigned int mr_page_size(struct rxe_mr *mr)
+{
+	return mr ? mr->ibmr.page_size : PAGE_SIZE;
+}
+
 enum rxe_mw_state {
 	RXE_MW_STATE_INVALID = RXE_MR_STATE_INVALID,
 	RXE_MW_STATE_FREE = RXE_MR_STATE_FREE,
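A small userspace illustration of the two mask conventions (assuming a
4 KiB page size; not driver code): after this patch mr->page_mask keeps
the page-number bits, matching how PAGE_MASK is used elsewhere in the
kernel, while the in-page offset is extracted with page_size - 1.

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint64_t page_size = 4096;
	uint64_t page_mask = ~(page_size - 1);	/* new convention */
	uint64_t iova = 0x12345ab8;

	/* page address keeps the page-number bits: 0x12345000 */
	printf("page addr   = 0x%" PRIx64 "\n", iova & page_mask);
	/* offset within the page: 0xab8 */
	printf("page offset = 0x%" PRIx64 "\n", iova & (page_size - 1));
	return 0;
}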
From patchwork Thu Dec 8 21:05:48 2022
X-Patchwork-Submitter: Bob Pearson
X-Patchwork-Id: 13068908
X-Patchwork-Delegate: jgg@ziepe.ca
From: Bob Pearson
To: jgg@nvidia.com, leonro@nvidia.com, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org
Cc: Bob Pearson
Subject: [PATCH for-next 6/6] RDMA/rxe: Replace rxe_map and rxe_phys_buf by xarray
Date: Thu, 8 Dec 2022 15:05:48 -0600
Message-Id: <20221208210547.28562-7-rpearsonhpe@gmail.com>
In-Reply-To: <20221208210547.28562-1-rpearsonhpe@gmail.com>
References: <20221208210547.28562-1-rpearsonhpe@gmail.com>

Replace struct rxe_phys_buf and struct rxe_map by struct xarray in
rxe_verbs.h. This allows using RCU locking on reads for the memory
maps stored in each mr. This is based on a sketch of a patch from
Jason Gunthorpe in the link below. Some changes were needed to make
this work. It applies cleanly to the current for-next and passes the
pyverbs, perftest and the same blktests test cases which run today.

Link: https://lore.kernel.org/linux-rdma/Y3gvZr6%2FNCii9Avy@nvidia.com/
Co-developed-by: Jason Gunthorpe
Signed-off-by: Bob Pearson
---
 drivers/infiniband/sw/rxe/rxe_loc.h   |   1 -
 drivers/infiniband/sw/rxe/rxe_mr.c    | 481 ++++++++++++--------------
 drivers/infiniband/sw/rxe/rxe_resp.c  |   5 +-
 drivers/infiniband/sw/rxe/rxe_verbs.h |  21 +-
 4 files changed, 226 insertions(+), 282 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe_loc.h b/drivers/infiniband/sw/rxe/rxe_loc.h
index e1bb977cdbc0..e5b6582e6b96 100644
--- a/drivers/infiniband/sw/rxe/rxe_loc.h
+++ b/drivers/infiniband/sw/rxe/rxe_loc.h
@@ -70,7 +70,6 @@ int copy_data(struct rxe_pd *pd, int access, struct rxe_dma_info *dma,
 	      void *addr, int length, enum rxe_mr_copy_dir dir);
 int rxe_map_mr_sg(struct ib_mr *ibmr, struct scatterlist *sg,
 		  int sg_nents, unsigned int *sg_offset);
-void *iova_to_vaddr(struct rxe_mr *mr, u64 iova, int length);
 int rxe_mr_do_atomic_op(struct rxe_mr *mr, u64 iova, int opcode,
 			u64 compare, u64 swap_add, u64 *orig_val);
 int rxe_mr_do_atomic_write(struct rxe_mr *mr, u64 iova, void *addr);
diff --git a/drivers/infiniband/sw/rxe/rxe_mr.c b/drivers/infiniband/sw/rxe/rxe_mr.c
index 2b600557fbfa..d8eaceca1b62 100644
--- a/drivers/infiniband/sw/rxe/rxe_mr.c
+++ b/drivers/infiniband/sw/rxe/rxe_mr.c
@@ -58,127 +58,108 @@ static void rxe_mr_init(int access, struct rxe_mr *mr)
 	mr->lkey = mr->ibmr.lkey = lkey;
 	mr->rkey = mr->ibmr.rkey = rkey;
 
+	mr->access = access;
 	mr->ibmr.page_size = PAGE_SIZE;
 	mr->page_mask = PAGE_MASK;
 	mr->page_shift = PAGE_SHIFT;
 	mr->state = RXE_MR_STATE_INVALID;
 }
 
-static int rxe_mr_alloc(struct rxe_mr *mr, int num_buf)
+void rxe_mr_init_dma(int access, struct rxe_mr *mr)
 {
-	int i;
-	int num_map;
-	struct rxe_map **map = mr->map;
-
-	num_map = (num_buf + RXE_BUF_PER_MAP - 1) / RXE_BUF_PER_MAP;
-
-	mr->map = kmalloc_array(num_map, sizeof(*map), GFP_KERNEL);
-	if (!mr->map)
-		goto err1;
-
-	for (i = 0; i < num_map; i++) {
-		mr->map[i] = kmalloc(sizeof(**map), GFP_KERNEL);
-		if (!mr->map[i])
-			goto err2;
-	}
-
-	BUILD_BUG_ON(!is_power_of_2(RXE_BUF_PER_MAP));
+	rxe_mr_init(access, mr);
 
-	mr->map_shift = ilog2(RXE_BUF_PER_MAP);
-	mr->map_mask = RXE_BUF_PER_MAP - 1;
+	mr->state = RXE_MR_STATE_VALID;
+	mr->ibmr.type = IB_MR_TYPE_DMA;
+}
 
-	mr->num_buf = num_buf;
-	mr->num_map = num_map;
-	mr->max_buf = num_map * RXE_BUF_PER_MAP;
+static unsigned long rxe_mr_iova_to_index(struct rxe_mr *mr, u64 iova)
+{
+	return (iova >> mr->page_shift) - (mr->ibmr.iova >> mr->page_shift);
+}
 
-	return 0;
+static int rxe_mr_fill_pages_from_sgt(struct rxe_mr *mr, struct sg_table *sgt)
+{
+	XA_STATE(xas, &mr->pages, 0);
+	struct sg_page_iter sg_iter;
 
-err2:
-	for (i--; i >= 0; i--)
-		kfree(mr->map[i]);
+	xa_init(&mr->pages);
 
-	kfree(mr->map);
-	mr->map = NULL;
-err1:
-	return -ENOMEM;
-}
+	__sg_page_iter_start(&sg_iter, sgt->sgl, sgt->orig_nents, 0);
+	if (!__sg_page_iter_next(&sg_iter))
+		return 0;
 
-void rxe_mr_init_dma(int access, struct rxe_mr *mr)
-{
-	rxe_mr_init(access, mr);
+	do {
+		xas_lock(&xas);
+		while (true) {
+			xas_store(&xas, sg_page_iter_page(&sg_iter));
+			if (xas_error(&xas))
+				break;
+			xas_next(&xas);
+			if (!__sg_page_iter_next(&sg_iter))
+				break;
+		}
+		xas_unlock(&xas);
+	} while (xas_nomem(&xas, GFP_KERNEL));
 
-	mr->access = access;
-	mr->state = RXE_MR_STATE_VALID;
-	mr->ibmr.type = IB_MR_TYPE_DMA;
+	return xas_error(&xas);
 }
 
 int rxe_mr_init_user(struct rxe_dev *rxe, u64 start, u64 length, u64 iova,
 		     int access, struct rxe_mr *mr)
 {
-	struct rxe_map **map;
-	struct rxe_phys_buf *buf = NULL;
-	struct ib_umem *umem;
-	struct sg_page_iter sg_iter;
-	int num_buf;
-	void *vaddr;
+	struct ib_umem *umem;
 	int err;
 
+	rxe_mr_init(access, mr);
+
 	umem = ib_umem_get(&rxe->ib_dev, start, length, access);
 	if (IS_ERR(umem)) {
 		rxe_dbg_mr(mr, "Unable to pin memory region err = %d\n",
 			(int)PTR_ERR(umem));
-		err = PTR_ERR(umem);
-		goto err_out;
+		return PTR_ERR(umem);
 	}
 
-	num_buf = ib_umem_num_pages(umem);
-
-	rxe_mr_init(access, mr);
-
-	err = rxe_mr_alloc(mr, num_buf);
+	err = rxe_mr_fill_pages_from_sgt(mr, &umem->sgt_append.sgt);
 	if (err) {
-		rxe_dbg_mr(mr, "Unable to allocate memory for map\n");
-		goto err_release_umem;
+		ib_umem_release(umem);
+		return err;
 	}
 
-	num_buf = 0;
-	map = mr->map;
-	if (length > 0) {
-		buf = map[0]->buf;
+	mr->umem = umem;
+	mr->ibmr.type = IB_MR_TYPE_USER;
+	mr->state = RXE_MR_STATE_VALID;
 
-		for_each_sgtable_page (&umem->sgt_append.sgt, &sg_iter, 0) {
-			if (num_buf >= RXE_BUF_PER_MAP) {
-				map++;
-				buf = map[0]->buf;
-				num_buf = 0;
-			}
+	return 0;
+}
 
-			vaddr = page_address(sg_page_iter_page(&sg_iter));
-			if (!vaddr) {
-				rxe_dbg_mr(mr, "Unable to get virtual address\n");
-				err = -ENOMEM;
-				goto err_release_umem;
-			}
-			buf->addr = (uintptr_t)vaddr;
-			buf->size = mr_page_size(mr);
-			num_buf++;
-			buf++;
+static int rxe_mr_alloc(struct rxe_mr *mr, int num_buf)
+{
+	XA_STATE(xas, &mr->pages, 0);
+	int i = 0;
+	int err;
+
+	xa_init(&mr->pages);
+
+	do {
+		xas_lock(&xas);
+		while (i != num_buf) {
+			xas_store(&xas, XA_ZERO_ENTRY);
+			if (xas_error(&xas))
+				break;
+			xas_next(&xas);
+			i++;
 		}
-	}
+		xas_unlock(&xas);
+	} while (xas_nomem(&xas, GFP_KERNEL));
 
-	mr->umem = umem;
-	mr->access = access;
-	mr->page_offset = ib_umem_offset(umem);
-	mr->state = RXE_MR_STATE_VALID;
-	mr->ibmr.type = IB_MR_TYPE_USER;
+	err = xas_error(&xas);
+	if (err)
+		return err;
 
-	return 0;
+	mr->num_buf = num_buf;
 
-err_release_umem:
-	ib_umem_release(umem);
-err_out:
-	return err;
+	return 0;
 }
 
 int rxe_mr_init_fast(int max_pages, struct rxe_mr *mr)
@@ -192,7 +173,6 @@ int rxe_mr_init_fast(int max_pages, struct rxe_mr *mr)
 	if (err)
 		goto err1;
 
-	mr->max_buf = max_pages;
 	mr->state = RXE_MR_STATE_FREE;
 	mr->ibmr.type = IB_MR_TYPE_MEM_REG;
 
@@ -202,193 +182,142 @@ int rxe_mr_init_fast(int max_pages, struct rxe_mr *mr)
 	return err;
 }
 
-static int rxe_set_page(struct ib_mr *ibmr, u64 addr)
+static int rxe_set_page(struct ib_mr *ibmr, u64 iova)
 {
 	struct rxe_mr *mr = to_rmr(ibmr);
-	struct rxe_map *map;
-	struct rxe_phys_buf *buf;
+	struct page *page = virt_to_page(iova & mr->page_mask);
+	XA_STATE(xas, &mr->pages, mr->nbuf);
+	int err;
 
 	if (unlikely(mr->nbuf == mr->num_buf))
 		return -ENOMEM;
 
-	map = mr->map[mr->nbuf / RXE_BUF_PER_MAP];
-	buf = &map->buf[mr->nbuf % RXE_BUF_PER_MAP];
+	do {
+		xas_lock(&xas);
+		xas_store(&xas, page);
+		xas_unlock(&xas);
+	} while (xas_nomem(&xas, GFP_KERNEL));
 
-	buf->addr = addr;
-	buf->size = ibmr->page_size;
-	mr->nbuf++;
+	err = xas_error(&xas);
+	if (err)
+		return err;
 
+	mr->nbuf++;
 	return 0;
 }
 
-int rxe_map_mr_sg(struct ib_mr *ibmr, struct scatterlist *sg,
+int rxe_map_mr_sg(struct ib_mr *ibmr, struct scatterlist *sgl,
 		  int sg_nents, unsigned int *sg_offset)
 {
 	struct rxe_mr *mr = to_rmr(ibmr);
 	unsigned int page_size = mr_page_size(mr);
 
+	mr->nbuf = 0;
 	mr->page_shift = ilog2(page_size);
 	mr->page_mask = ~((u64)page_size - 1);
-	mr->page_offset = ibmr->iova & (page_size - 1);
-
-	mr->nbuf = 0;
+	mr->page_offset = mr->ibmr.iova & (page_size - 1);
 
-	return ib_sg_to_pages(ibmr, sg, sg_nents, sg_offset, rxe_set_page);
+	return ib_sg_to_pages(ibmr, sgl, sg_nents, sg_offset, rxe_set_page);
 }
 
-static void lookup_iova(struct rxe_mr *mr, u64 iova, int *m_out, int *n_out,
-			size_t *offset_out)
+/*
+ * TODO: Attempting to modify the mr page map between the time
+ * a packet is received and the map is referenced as here
+ * in xa_load(&mr->pages) will cause problems. It is OK to
+ * deregister the mr since the mr reference counts will preserve
+ * it until memory accesses are complete. Currently reregister mr
+ * operations are not supported by the rxe driver but could be
+ * in the future. Invalidate followed by fast_reg mr will change
+ * the map and then the rkey so delayed packets arriving in the
+ * middle could use the wrong map entries. This isn't new but was
+ * already the case in the earlier implementation. This won't be
+ * a problem for well-behaved programs, which wait until all the
+ * outstanding packets for the first FMR complete before remapping
+ * to the second.
+ */
+static int rxe_mr_copy_xarray(struct rxe_mr *mr, void *addr,
+			      unsigned long index,
+			      unsigned int page_offset, unsigned int length,
+			      enum rxe_mr_copy_dir dir)
 {
-	size_t offset = iova - mr->ibmr.iova + mr->page_offset;
-	int map_index;
-	int buf_index;
-	u64 length;
-
-	if (likely(mr->page_shift)) {
-		*offset_out = offset & (mr_page_size(mr) - 1);
-		offset >>= mr->page_shift;
-		*n_out = offset & mr->map_mask;
-		*m_out = offset >> mr->map_shift;
-	} else {
-		map_index = 0;
-		buf_index = 0;
-
-		length = mr->map[map_index]->buf[buf_index].size;
-
-		while (offset >= length) {
-			offset -= length;
-			buf_index++;
-
-			if (buf_index == RXE_BUF_PER_MAP) {
-				map_index++;
-				buf_index = 0;
-			}
-			length = mr->map[map_index]->buf[buf_index].size;
-		}
+	struct page *page;
+	unsigned int bytes;
+	void *va;
 
-		*m_out = map_index;
-		*n_out = buf_index;
-		*offset_out = offset;
+	while (length) {
+		page = xa_load(&mr->pages, index);
+		if (WARN_ON(!page))
+			return -EINVAL;
+
+		bytes = min_t(unsigned int, length,
+				mr_page_size(mr) - page_offset);
+		va = kmap_local_page(page);
+		if (dir == RXE_FROM_MR_OBJ)
+			memcpy(addr, va + page_offset, bytes);
+		else
+			memcpy(va + page_offset, addr, bytes);
+		kunmap_local(va);
+
+		page_offset = 0;
+		addr += bytes;
+		length -= bytes;
+		index++;
 	}
+
+	return 0;
 }
 
-void *iova_to_vaddr(struct rxe_mr *mr, u64 iova, int length)
+static void rxe_mr_copy_dma(struct rxe_mr *mr, u64 iova, void *addr,
+			    unsigned int page_offset, unsigned int length,
+			    enum rxe_mr_copy_dir dir)
 {
-	size_t offset;
-	int m, n;
-	void *addr;
-
-	if (mr->state != RXE_MR_STATE_VALID) {
-		rxe_dbg_mr(mr, "Not in valid state\n");
-		addr = NULL;
-		goto out;
-	}
-
-	if (!mr->map) {
-		addr = (void *)(uintptr_t)iova;
-		goto out;
-	}
-
-	if (mr_check_range(mr, iova, length)) {
-		rxe_dbg_mr(mr, "Range violation\n");
-		addr = NULL;
-		goto out;
-	}
-
-	lookup_iova(mr, iova, &m, &n, &offset);
+	unsigned int bytes;
+	struct page *page;
+	u8 *va;
 
-	if (offset + length > mr->map[m]->buf[n].size) {
-		rxe_dbg_mr(mr, "Crosses page boundary\n");
-		addr = NULL;
-		goto out;
+	while (length) {
+		page = virt_to_page(iova & mr->page_mask);
+		va = kmap_local_page(page);
+		bytes = min_t(unsigned int, length, PAGE_SIZE - page_offset);
+
+		if (dir == RXE_TO_MR_OBJ)
+			memcpy(va + page_offset, addr, bytes);
+		else
+			memcpy(addr, va + page_offset, bytes);
+
+		kunmap_local(va);
+		page_offset = 0;
+		iova += bytes;
+		addr += bytes;
+		length -= bytes;
 	}
-
-	addr = (void *)(uintptr_t)mr->map[m]->buf[n].addr + offset;
-
-out:
-	return addr;
 }
 
-/* copy data from a range (vaddr, vaddr+length-1) to or from
- * a mr object starting at iova.
- */
 int rxe_mr_copy(struct rxe_mr *mr, u64 iova, void *addr,
 		int length, enum rxe_mr_copy_dir dir)
 {
-	int			err;
-	int			bytes;
-	u8			*va;
-	struct rxe_map		**map;
-	struct rxe_phys_buf	*buf;
-	int			m;
-	int			i;
-	size_t			offset;
+	unsigned int page_offset;
+	unsigned long index;
+	int err;
 
 	if (length == 0)
 		return 0;
 
 	if (mr->ibmr.type == IB_MR_TYPE_DMA) {
-		u8 *src, *dest;
-
-		src = (dir == RXE_TO_MR_OBJ) ? addr : ((void *)(uintptr_t)iova);
-
-		dest = (dir == RXE_TO_MR_OBJ) ?
-			((void *)(uintptr_t)iova) : addr;
-
-		memcpy(dest, src, length);
-
+		page_offset = iova & (PAGE_SIZE - 1);
+		rxe_mr_copy_dma(mr, iova, addr, page_offset, length, dir);
 		return 0;
 	}
 
-	WARN_ON_ONCE(!mr->map);
-
 	err = mr_check_range(mr, iova, length);
-	if (err) {
-		err = -EFAULT;
-		goto err1;
-	}
-
-	lookup_iova(mr, iova, &m, &i, &offset);
-
-	map = mr->map + m;
-	buf = map[0]->buf + i;
-
-	while (length > 0) {
-		u8 *src, *dest;
-
-		va = (u8 *)(uintptr_t)buf->addr + offset;
-		src = (dir == RXE_TO_MR_OBJ) ? addr : va;
-		dest = (dir == RXE_TO_MR_OBJ) ? va : addr;
-
-		bytes = buf->size - offset;
-
-		if (bytes > length)
-			bytes = length;
-
-		memcpy(dest, src, bytes);
-
-		length -= bytes;
-		addr += bytes;
-
-		offset = 0;
-		buf++;
-		i++;
-
-		if (i == RXE_BUF_PER_MAP) {
-			i = 0;
-			map++;
-			buf = map[0]->buf;
-		}
-	}
-
-	return 0;
+	if (err)
+		return err;
 
-err1:
-	return err;
+	page_offset = iova & (mr_page_size(mr) - 1);
+	index = rxe_mr_iova_to_index(mr, iova);
+	return rxe_mr_copy_xarray(mr, addr, index, page_offset, length, dir);
 }
 
-/* copy data in or out of a wqe, i.e. sg list
- * under the control of a dma descriptor
- */
 int copy_data(
@@ -455,7 +384,6 @@ int copy_data(
 
 		if (bytes > 0) {
 			iova = sge->addr + offset;
-
 			err = rxe_mr_copy(mr, iova, addr, bytes, dir);
 			if (err)
 				goto err2;
@@ -484,20 +412,47 @@ int copy_data(
 
 static DEFINE_SPINLOCK(atomic_ops_lock);
 
+/*
+ * Returns:
+ *	 0 on success
+ *	-1 address is misaligned
+ *	-2 access violations
+ */
 int rxe_mr_do_atomic_op(struct rxe_mr *mr, u64 iova, int opcode,
 			u64 compare, u64 swap_add, u64 *orig_val)
 {
-	u64 *vaddr = iova_to_vaddr(mr, iova, sizeof(u64));
+	unsigned int page_offset;
+	struct page *page;
 	u64 value;
+	u64 *va;
 
-	/* error returns need to match the handling in rxe_resp.c */
-	if (mr->state != RXE_MR_STATE_VALID || !vaddr)
-		return -EFAULT;
-	if ((uintptr_t)vaddr & 7)
-		return -EINVAL;
+	if (unlikely(mr->state != RXE_MR_STATE_VALID))
+		return -2;
+
+	if (mr->ibmr.type == IB_MR_TYPE_DMA) {
+		page_offset = iova & (PAGE_SIZE - 1);
+		page = virt_to_page(iova & PAGE_MASK);
+	} else {
+		unsigned long index;
+		int err;
+
+		err = mr_check_range(mr, iova, 8);
+		if (err)
+			return err;
+		page_offset = iova & (mr_page_size(mr) - 1);
+		index = rxe_mr_iova_to_index(mr, iova);
+		page = xa_load(&mr->pages, index);
+		if (!page)
+			return -2;
+	}
+
+	if (unlikely(page_offset & 0x7))
+		return -1;
+
+	va = kmap_local_page(page);
 
 	spin_lock_bh(&atomic_ops_lock);
-	value = *orig_val = *vaddr;
+	*orig_val = value = va[page_offset >> 3];
 
 	if (opcode == IB_OPCODE_RC_COMPARE_SWAP) {
 		if (value == compare)
@@ -506,9 +461,11 @@ int rxe_mr_do_atomic_op(struct rxe_mr *mr, u64 iova, int opcode,
 		value += swap_add;
 	}
 
-	*vaddr = value;
+	va[page_offset >> 3] = value;
 	spin_unlock_bh(&atomic_ops_lock);
 
+	kunmap_local(va);
+
 	return 0;
 }
 
@@ -520,9 +477,11 @@ int rxe_mr_do_atomic_op(struct rxe_mr *mr, u64 iova, int opcode,
  */
 int rxe_mr_do_atomic_write(struct rxe_mr *mr, u64 iova, void *addr)
 {
-	u64 *vaddr;
-	u64 value;
+	unsigned int page_offset;
+	struct page *page;
 	unsigned int length = 8;
+	u64 value;
+	u64 *va;
 
 	/* See IBA oA19-28 */
 	if (unlikely(mr->state != RXE_MR_STATE_VALID)) {
@@ -530,23 +489,38 @@ int rxe_mr_do_atomic_write(struct rxe_mr *mr, u64 iova, void *addr)
 		return -2;
 	}
 
-	vaddr = iova_to_vaddr(mr, iova, length);
-	if (unlikely(!vaddr)) {
-		rxe_dbg_mr(mr, "iova out of range");
-		return -2;
+	if (mr->ibmr.type == IB_MR_TYPE_DMA) {
+		page_offset = iova & (PAGE_SIZE - 1);
+		page = virt_to_page(iova & PAGE_MASK);
+	} else {
+		unsigned long index;
+		int err;
+
+		/* See IBA oA19-28 */
+		err = mr_check_range(mr, iova, length);
+		if (unlikely(err)) {
+			rxe_dbg_mr(mr, "iova out of range");
range"); + return -2; + } + page_offset = iova & (mr_page_size(mr) - 1); + index = rxe_mr_iova_to_index(mr, iova); + page = xa_load(&mr->pages, index); + if (WARN_ON(!page)) + return -2; + } + /* See IBA A19.4.2 */ - if (unlikely((uintptr_t)vaddr & 0x7 || iova & 0x7)) { + if (iova & 0x7) { rxe_dbg_mr(mr, "misaligned address"); return -1; } - vaddr = iova_to_vaddr(mr, iova, length); - if (unlikely(!vaddr)) { - rxe_dbg_mr(mr, "iova out of range"); - return -2; - } - - /* this makes no sense. What of payload is not 8? */ + va = kmap_local_page(page); memcpy(&value, addr, length); /* Do atomic write after all prior operations have completed */ - smp_store_release(vaddr, value); + smp_store_release(&va[page_offset >> 3], value); + kunmap_local(va); return 0; } @@ -584,12 +558,6 @@ int advance_dma_data(struct rxe_dma_info *dma, unsigned int length) return 0; } -/* (1) find the mr corresponding to lkey/rkey - * depending on lookup_type - * (2) verify that the (qp) pd matches the mr pd - * (3) verify that the mr can support the requested access - * (4) verify that mr state is valid - */ struct rxe_mr *lookup_mr(struct rxe_pd *pd, int access, u32 key, enum rxe_mr_lookup_type type) { @@ -710,15 +678,8 @@ int rxe_dereg_mr(struct ib_mr *ibmr, struct ib_udata *udata) void rxe_mr_cleanup(struct rxe_pool_elem *elem) { struct rxe_mr *mr = container_of(elem, typeof(*mr), elem); - int i; rxe_put(mr_pd(mr)); ib_umem_release(mr->umem); - - if (mr->map) { - for (i = 0; i < mr->num_map; i++) - kfree(mr->map[i]); - - kfree(mr->map); - } + xa_destroy(&mr->pages); } diff --git a/drivers/infiniband/sw/rxe/rxe_resp.c b/drivers/infiniband/sw/rxe/rxe_resp.c index abe9cfd935c2..188e7315158a 100644 --- a/drivers/infiniband/sw/rxe/rxe_resp.c +++ b/drivers/infiniband/sw/rxe/rxe_resp.c @@ -635,7 +635,7 @@ static enum resp_states atomic_reply(struct rxe_qp *qp, atmeth_comp(pkt), atmeth_swap_add(pkt), &res->atomic.orig_val); - if (err == -EINVAL) + if (err == -1) return RESPST_ERR_MISALIGNED_ATOMIC; else if (err) return RESPST_ERR_RKEY_VIOLATION; @@ -659,6 +659,7 @@ static enum resp_states atomic_write_reply(struct rxe_qp *qp, struct resp_res *res = qp->resp.res; struct rxe_mr *mr = qp->resp.mr; void *addr = payload_addr(pkt); + unsigned int length = 8; int err; if (!res) { @@ -676,7 +677,7 @@ static enum resp_states atomic_write_reply(struct rxe_qp *qp, else return RESPST_ERR_RKEY_VIOLATION; - qp->resp.resid -= 8; + qp->resp.resid -= length; qp->resp.msn++; /* next expected psn, read handles this separately */ diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.h b/drivers/infiniband/sw/rxe/rxe_verbs.h index ddaa4de5e1c7..7f360c402473 100644 --- a/drivers/infiniband/sw/rxe/rxe_verbs.h +++ b/drivers/infiniband/sw/rxe/rxe_verbs.h @@ -277,17 +277,6 @@ enum rxe_mr_lookup_type { RXE_LOOKUP_REMOTE, }; -#define RXE_BUF_PER_MAP (PAGE_SIZE / sizeof(struct rxe_phys_buf)) - -struct rxe_phys_buf { - u64 addr; - u64 size; -}; - -struct rxe_map { - struct rxe_phys_buf buf[RXE_BUF_PER_MAP]; -}; - static inline int rkey_is_mw(u32 rkey) { u32 index = rkey >> 8; @@ -305,22 +294,16 @@ struct rxe_mr { u32 rkey; enum rxe_mr_state state; int access; + atomic_t num_mw; unsigned int page_offset; unsigned int page_shift; u64 page_mask; - int map_shift; - int map_mask; u32 num_buf; u32 nbuf; - u32 max_buf; - u32 num_map; - - atomic_t num_mw; - - struct rxe_map **map; + struct xarray pages; }; static inline unsigned int mr_page_size(struct rxe_mr *mr)