From patchwork Thu Jan 19 23:59:32 2023
X-Patchwork-Submitter: Bob Pearson
X-Patchwork-Id: 13108998
X-Patchwork-Delegate: jgg@ziepe.ca
From: Bob Pearson
To: jgg@nvidia.com, zyjzyj2000@gmail.com, leonro@nvidia.com, yangx.jy@fujitsu.com, lizhijian@fujitsu.com, linux-rdma@vger.kernel.org
Cc: Bob Pearson
Subject: [PATCH for-next v7 1/6] RDMA/rxe: Cleanup mr_check_range
Date: Thu, 19 Jan 2023 17:59:32 -0600
Message-Id: <20230119235936.19728-2-rpearsonhpe@gmail.com>
In-Reply-To: <20230119235936.19728-1-rpearsonhpe@gmail.com>
References: <20230119235936.19728-1-rpearsonhpe@gmail.com>

Remove blank lines and replace EFAULT by EINVAL when an invalid mr type
is used.

Signed-off-by: Bob Pearson
---
 drivers/infiniband/sw/rxe/rxe_mr.c | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe_mr.c b/drivers/infiniband/sw/rxe/rxe_mr.c
index 072eac4b65d2..632ee1e516a1 100644
--- a/drivers/infiniband/sw/rxe/rxe_mr.c
+++ b/drivers/infiniband/sw/rxe/rxe_mr.c
@@ -26,8 +26,6 @@ u8 rxe_get_next_key(u32 last_key)
 
 int mr_check_range(struct rxe_mr *mr, u64 iova, size_t length)
 {
-
-
 	switch (mr->ibmr.type) {
 	case IB_MR_TYPE_DMA:
 		return 0;
@@ -41,7 +39,7 @@ int mr_check_range(struct rxe_mr *mr, u64 iova, size_t length)
 
 	default:
 		rxe_dbg_mr(mr, "type (%d) not supported\n", mr->ibmr.type);
-		return -EFAULT;
+		return -EINVAL;
 	}
 }
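For readers outside the driver, the check being cleaned up here is a plain
interval-containment test. A minimal stand-alone sketch (illustrative names,
not driver code):

	#include <stdint.h>
	#include <stddef.h>

	/* The access [iova, iova + length) must lie entirely inside the
	 * registered range [mr_iova, mr_iova + mr_length). DMA-type MRs
	 * skip the check because they cover the whole address space.
	 */
	static int check_range(uint64_t mr_iova, size_t mr_length,
			       uint64_t iova, size_t length)
	{
		if (iova < mr_iova || length > mr_length ||
		    iova > mr_iova + mr_length - length)
			return -1;	/* out of range */
		return 0;
	}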
From patchwork Thu Jan 19 23:59:33 2023
X-Patchwork-Submitter: Bob Pearson
X-Patchwork-Id: 13108999
X-Patchwork-Delegate: jgg@ziepe.ca

From: Bob Pearson
To: jgg@nvidia.com, zyjzyj2000@gmail.com, leonro@nvidia.com, yangx.jy@fujitsu.com, lizhijian@fujitsu.com, linux-rdma@vger.kernel.org
Cc: Bob Pearson
Subject: [PATCH for-next v7 2/6] RDMA/rxe: Move rxe_map_mr_sg to rxe_mr.c
Date: Thu, 19 Jan 2023 17:59:33 -0600
Message-Id: <20230119235936.19728-3-rpearsonhpe@gmail.com>
In-Reply-To: <20230119235936.19728-1-rpearsonhpe@gmail.com>
References: <20230119235936.19728-1-rpearsonhpe@gmail.com>

Move rxe_map_mr_sg() to rxe_mr.c where it makes a little more sense.
Signed-off-by: Bob Pearson
---
 drivers/infiniband/sw/rxe/rxe_loc.h   |  2 ++
 drivers/infiniband/sw/rxe/rxe_mr.c    | 36 +++++++++++++++++++++++++++
 drivers/infiniband/sw/rxe/rxe_verbs.c | 36 ---------------------------
 3 files changed, 38 insertions(+), 36 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe_loc.h b/drivers/infiniband/sw/rxe/rxe_loc.h
index 948ce4902b10..29b6c2143045 100644
--- a/drivers/infiniband/sw/rxe/rxe_loc.h
+++ b/drivers/infiniband/sw/rxe/rxe_loc.h
@@ -69,6 +69,8 @@ int rxe_mr_copy(struct rxe_mr *mr, u64 iova, void *addr, int length,
 		enum rxe_mr_copy_dir dir);
 int copy_data(struct rxe_pd *pd, int access, struct rxe_dma_info *dma,
 	      void *addr, int length, enum rxe_mr_copy_dir dir);
+int rxe_map_mr_sg(struct ib_mr *ibmr, struct scatterlist *sg,
+		  int sg_nents, unsigned int *sg_offset);
 void *iova_to_vaddr(struct rxe_mr *mr, u64 iova, int length);
 struct rxe_mr *lookup_mr(struct rxe_pd *pd, int access, u32 key,
 			 enum rxe_mr_lookup_type type);
diff --git a/drivers/infiniband/sw/rxe/rxe_mr.c b/drivers/infiniband/sw/rxe/rxe_mr.c
index 632ee1e516a1..229c7259644c 100644
--- a/drivers/infiniband/sw/rxe/rxe_mr.c
+++ b/drivers/infiniband/sw/rxe/rxe_mr.c
@@ -223,6 +223,42 @@ int rxe_mr_init_fast(int max_pages, struct rxe_mr *mr)
 	return err;
 }
 
+static int rxe_set_page(struct ib_mr *ibmr, u64 addr)
+{
+	struct rxe_mr *mr = to_rmr(ibmr);
+	struct rxe_map *map;
+	struct rxe_phys_buf *buf;
+
+	if (unlikely(mr->nbuf == mr->num_buf))
+		return -ENOMEM;
+
+	map = mr->map[mr->nbuf / RXE_BUF_PER_MAP];
+	buf = &map->buf[mr->nbuf % RXE_BUF_PER_MAP];
+
+	buf->addr = addr;
+	buf->size = ibmr->page_size;
+	mr->nbuf++;
+
+	return 0;
+}
+
+int rxe_map_mr_sg(struct ib_mr *ibmr, struct scatterlist *sg,
+		  int sg_nents, unsigned int *sg_offset)
+{
+	struct rxe_mr *mr = to_rmr(ibmr);
+	int n;
+
+	mr->nbuf = 0;
+
+	n = ib_sg_to_pages(ibmr, sg, sg_nents, sg_offset, rxe_set_page);
+
+	mr->page_shift = ilog2(ibmr->page_size);
+	mr->page_mask = ibmr->page_size - 1;
+	mr->offset = ibmr->iova & mr->page_mask;
+
+	return n;
+}
+
 static void lookup_iova(struct rxe_mr *mr, u64 iova, int *m_out, int *n_out,
 			size_t *offset_out)
 {
diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.c b/drivers/infiniband/sw/rxe/rxe_verbs.c
index 025b35bf014e..7a902e0a0607 100644
--- a/drivers/infiniband/sw/rxe/rxe_verbs.c
+++ b/drivers/infiniband/sw/rxe/rxe_verbs.c
@@ -948,42 +948,6 @@ static struct ib_mr *rxe_alloc_mr(struct ib_pd *ibpd, enum ib_mr_type mr_type,
 	return ERR_PTR(err);
 }
 
-static int rxe_set_page(struct ib_mr *ibmr, u64 addr)
-{
-	struct rxe_mr *mr = to_rmr(ibmr);
-	struct rxe_map *map;
-	struct rxe_phys_buf *buf;
-
-	if (unlikely(mr->nbuf == mr->num_buf))
-		return -ENOMEM;
-
-	map = mr->map[mr->nbuf / RXE_BUF_PER_MAP];
-	buf = &map->buf[mr->nbuf % RXE_BUF_PER_MAP];
-
-	buf->addr = addr;
-	buf->size = ibmr->page_size;
-	mr->nbuf++;
-
-	return 0;
-}
-
-static int rxe_map_mr_sg(struct ib_mr *ibmr, struct scatterlist *sg,
-			 int sg_nents, unsigned int *sg_offset)
-{
-	struct rxe_mr *mr = to_rmr(ibmr);
-	int n;
-
-	mr->nbuf = 0;
-
-	n = ib_sg_to_pages(ibmr, sg, sg_nents, sg_offset, rxe_set_page);
-
-	mr->page_shift = ilog2(ibmr->page_size);
-	mr->page_mask = ibmr->page_size - 1;
-	mr->offset = ibmr->iova & mr->page_mask;
-
-	return n;
-}
-
 static ssize_t parent_show(struct device *device,
 			   struct device_attribute *attr, char *buf)
 {
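rxe_map_mr_sg() is a thin wrapper around ib_sg_to_pages(), which walks the
scatterlist in ibmr->page_size chunks and hands each page address to the
driver's rxe_set_page() callback. A stand-alone sketch of that callback
pattern, simplified to one contiguous buffer (illustrative names only, not
the rdma core implementation):

	#include <stdint.h>
	#include <stddef.h>

	/* the driver-supplied "set_page" callback type */
	typedef int (*set_page_fn)(void *ctx, uint64_t addr);

	/* toy stand-in for ib_sg_to_pages(): chop one contiguous buffer
	 * into page_size pieces and report each piece to the callback
	 */
	static int sg_to_pages(void *ctx, uint64_t addr, size_t length,
			       uint64_t page_size, set_page_fn set_page)
	{
		uint64_t page = addr & ~(page_size - 1);
		int n = 0;

		while (page < addr + length) {
			if (set_page(ctx, page))
				return n;	/* map is full */
			page += page_size;
			n++;
		}
		return n;	/* number of pages recorded */
	}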
From patchwork Thu Jan 19 23:59:34 2023
X-Patchwork-Submitter: Bob Pearson
X-Patchwork-Id: 13109000
X-Patchwork-Delegate: jgg@ziepe.ca
From: Bob Pearson
To: jgg@nvidia.com, zyjzyj2000@gmail.com, leonro@nvidia.com, yangx.jy@fujitsu.com, lizhijian@fujitsu.com, linux-rdma@vger.kernel.org
Cc: Bob Pearson
Subject: [PATCH for-next v7 3/6] RDMA-rxe: Isolate mr code from atomic_reply()
Date: Thu, 19 Jan 2023 17:59:34 -0600
Message-Id: <20230119235936.19728-4-rpearsonhpe@gmail.com>
In-Reply-To: <20230119235936.19728-1-rpearsonhpe@gmail.com>
References: <20230119235936.19728-1-rpearsonhpe@gmail.com>

Isolate the mr-specific code from atomic_reply() in rxe_resp.c into a
subroutine rxe_mr_do_atomic_op() in rxe_mr.c. Make minor cleanups to
mr_check_range() and iova_to_vaddr(). Move enum resp_states to rxe.h.

Signed-off-by: Bob Pearson
---
 drivers/infiniband/sw/rxe/rxe.h      | 38 +++++++++++++
 drivers/infiniband/sw/rxe/rxe_loc.h  |  2 +
 drivers/infiniband/sw/rxe/rxe_mr.c   | 83 ++++++++++++++++----------
 drivers/infiniband/sw/rxe/rxe_resp.c | 82 ++++-----------------------
 4 files changed, 105 insertions(+), 100 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe.h b/drivers/infiniband/sw/rxe/rxe.h
index ab334900fcc3..2415f3704f57 100644
--- a/drivers/infiniband/sw/rxe/rxe.h
+++ b/drivers/infiniband/sw/rxe/rxe.h
@@ -57,6 +57,44 @@
 #define rxe_dbg_mw(mw, fmt, ...) ibdev_dbg((mw)->ibmw.device,		\
 		"mw#%d %s: " fmt, (mw)->elem.index, __func__, ##__VA_ARGS__)
 
+/* responder states */
+enum resp_states {
+	RESPST_NONE,
+	RESPST_GET_REQ,
+	RESPST_CHK_PSN,
+	RESPST_CHK_OP_SEQ,
+	RESPST_CHK_OP_VALID,
+	RESPST_CHK_RESOURCE,
+	RESPST_CHK_LENGTH,
+	RESPST_CHK_RKEY,
+	RESPST_EXECUTE,
+	RESPST_READ_REPLY,
+	RESPST_ATOMIC_REPLY,
+	RESPST_ATOMIC_WRITE_REPLY,
+	RESPST_PROCESS_FLUSH,
+	RESPST_COMPLETE,
+	RESPST_ACKNOWLEDGE,
+	RESPST_CLEANUP,
+	RESPST_DUPLICATE_REQUEST,
+	RESPST_ERR_MALFORMED_WQE,
+	RESPST_ERR_UNSUPPORTED_OPCODE,
+	RESPST_ERR_MISALIGNED_ATOMIC,
+	RESPST_ERR_PSN_OUT_OF_SEQ,
+	RESPST_ERR_MISSING_OPCODE_FIRST,
+	RESPST_ERR_MISSING_OPCODE_LAST_C,
+	RESPST_ERR_MISSING_OPCODE_LAST_D1E,
+	RESPST_ERR_TOO_MANY_RDMA_ATM_REQ,
+	RESPST_ERR_RNR,
+	RESPST_ERR_RKEY_VIOLATION,
+	RESPST_ERR_INVALIDATE_RKEY,
+	RESPST_ERR_LENGTH,
+	RESPST_ERR_CQ_OVERFLOW,
+	RESPST_ERROR,
+	RESPST_RESET,
+	RESPST_DONE,
+	RESPST_EXIT,
+};
+
 void rxe_set_mtu(struct rxe_dev *rxe, unsigned int dev_mtu);
 
 int rxe_add(struct rxe_dev *rxe, unsigned int mtu, const char *ibdev_name);
diff --git a/drivers/infiniband/sw/rxe/rxe_loc.h b/drivers/infiniband/sw/rxe/rxe_loc.h
index 29b6c2143045..bcb1bbcf50df 100644
--- a/drivers/infiniband/sw/rxe/rxe_loc.h
+++ b/drivers/infiniband/sw/rxe/rxe_loc.h
@@ -72,6 +72,8 @@ int copy_data(struct rxe_pd *pd, int access, struct rxe_dma_info *dma,
 int rxe_map_mr_sg(struct ib_mr *ibmr, struct scatterlist *sg,
 		  int sg_nents, unsigned int *sg_offset);
 void *iova_to_vaddr(struct rxe_mr *mr, u64 iova, int length);
+int rxe_mr_do_atomic_op(struct rxe_mr *mr, u64 iova, int opcode,
+			u64 compare, u64 swap_add, u64 *orig_val);
 struct rxe_mr *lookup_mr(struct rxe_pd *pd, int access, u32 key,
 			 enum rxe_mr_lookup_type type);
 int mr_check_range(struct rxe_mr *mr, u64 iova, size_t length);
diff --git a/drivers/infiniband/sw/rxe/rxe_mr.c b/drivers/infiniband/sw/rxe/rxe_mr.c
index 229c7259644c..df9741474f1f 100644
--- a/drivers/infiniband/sw/rxe/rxe_mr.c
+++ b/drivers/infiniband/sw/rxe/rxe_mr.c
@@ -32,13 +32,15 @@ int mr_check_range(struct rxe_mr *mr, u64 iova, size_t length)
 
 	case IB_MR_TYPE_USER:
 	case IB_MR_TYPE_MEM_REG:
-		if (iova < mr->ibmr.iova || length > mr->ibmr.length ||
-		    iova > mr->ibmr.iova + mr->ibmr.length - length)
-			return -EFAULT;
+		if (iova < mr->ibmr.iova ||
+		    iova + length > mr->ibmr.iova + mr->ibmr.length) {
+			rxe_dbg_mr(mr, "iova/length out of range");
+			return -EINVAL;
+		}
 		return 0;
 
 	default:
-		rxe_dbg_mr(mr, "type (%d) not supported\n", mr->ibmr.type);
+		rxe_dbg_mr(mr, "mr type not supported\n");
 		return -EINVAL;
 	}
 }
@@ -299,37 +301,22 @@ void *iova_to_vaddr(struct rxe_mr *mr, u64 iova, int length)
 {
 	size_t offset;
 	int m, n;
-	void *addr;
 
-	if (mr->state != RXE_MR_STATE_VALID) {
-		rxe_dbg_mr(mr, "Not in valid state\n");
-		addr = NULL;
-		goto out;
-	}
+	if (mr->state != RXE_MR_STATE_VALID)
+		return NULL;
 
-	if (!mr->map) {
-		addr = (void *)(uintptr_t)iova;
-		goto out;
-	}
+	if (mr->ibmr.type == IB_MR_TYPE_DMA)
+		return (void *)(uintptr_t)iova;
 
-	if (mr_check_range(mr, iova, length)) {
-		rxe_dbg_mr(mr, "Range violation\n");
-		addr = NULL;
-		goto out;
-	}
+	if (mr_check_range(mr, iova, length))
+		return NULL;
 
 	lookup_iova(mr, iova, &m, &n, &offset);
 
-	if (offset + length > mr->map[m]->buf[n].size) {
-		rxe_dbg_mr(mr, "Crosses page boundary\n");
-		addr = NULL;
-		goto out;
-	}
-
-	addr = (void *)(uintptr_t)mr->map[m]->buf[n].addr + offset;
+	if (offset + length > mr->map[m]->buf[n].size)
+		return NULL;
 
-out:
-	return addr;
+	return (void *)(uintptr_t)mr->map[m]->buf[n].addr + offset;
 }
 
 int rxe_flush_pmem_iova(struct rxe_mr *mr, u64 iova, int length)
@@ -538,6 +525,46 @@ int copy_data(
 	return err;
 }
 
+/* Guarantee atomicity of atomic operations at the machine level. */
+static DEFINE_SPINLOCK(atomic_ops_lock);
+
+int rxe_mr_do_atomic_op(struct rxe_mr *mr, u64 iova, int opcode,
+			u64 compare, u64 swap_add, u64 *orig_val)
+{
+	u64 *va;
+	u64 value;
+
+	if (mr->state != RXE_MR_STATE_VALID) {
+		rxe_dbg_mr(mr, "mr not in valid state");
+		return RESPST_ERR_RKEY_VIOLATION;
+	}
+
+	va = iova_to_vaddr(mr, iova, sizeof(u64));
+	if (!va) {
+		rxe_dbg_mr(mr, "iova out of range");
+		return RESPST_ERR_RKEY_VIOLATION;
+	}
+
+	if ((uintptr_t)va & 0x7) {
+		rxe_dbg_mr(mr, "iova not aligned");
+		return RESPST_ERR_MISALIGNED_ATOMIC;
+	}
+
+	spin_lock_bh(&atomic_ops_lock);
+	value = *orig_val = *va;
+
+	if (opcode == IB_OPCODE_RC_COMPARE_SWAP) {
+		if (value == compare)
+			*va = swap_add;
+	} else {
+		value += swap_add;
+		*va = value;
+	}
+	spin_unlock_bh(&atomic_ops_lock);
+
+	return 0;
+}
+
 int advance_dma_data(struct rxe_dma_info *dma, unsigned int length)
 {
 	struct rxe_sge *sge = &dma->sge[dma->cur_sge];
diff --git a/drivers/infiniband/sw/rxe/rxe_resp.c b/drivers/infiniband/sw/rxe/rxe_resp.c
index c74972244f08..9d4b4e9b42fc 100644
--- a/drivers/infiniband/sw/rxe/rxe_resp.c
+++ b/drivers/infiniband/sw/rxe/rxe_resp.c
@@ -10,43 +10,6 @@
 #include "rxe_loc.h"
 #include "rxe_queue.h"
 
-enum resp_states {
-	RESPST_NONE,
-	RESPST_GET_REQ,
-	RESPST_CHK_PSN,
-	RESPST_CHK_OP_SEQ,
-	RESPST_CHK_OP_VALID,
-	RESPST_CHK_RESOURCE,
-	RESPST_CHK_LENGTH,
-	RESPST_CHK_RKEY,
-	RESPST_EXECUTE,
-	RESPST_READ_REPLY,
-	RESPST_ATOMIC_REPLY,
-	RESPST_ATOMIC_WRITE_REPLY,
-	RESPST_PROCESS_FLUSH,
-	RESPST_COMPLETE,
-	RESPST_ACKNOWLEDGE,
-	RESPST_CLEANUP,
-	RESPST_DUPLICATE_REQUEST,
-	RESPST_ERR_MALFORMED_WQE,
-	RESPST_ERR_UNSUPPORTED_OPCODE,
-	RESPST_ERR_MISALIGNED_ATOMIC,
-	RESPST_ERR_PSN_OUT_OF_SEQ,
-	RESPST_ERR_MISSING_OPCODE_FIRST,
-	RESPST_ERR_MISSING_OPCODE_LAST_C,
-	RESPST_ERR_MISSING_OPCODE_LAST_D1E,
-	RESPST_ERR_TOO_MANY_RDMA_ATM_REQ,
-	RESPST_ERR_RNR,
-	RESPST_ERR_RKEY_VIOLATION,
-	RESPST_ERR_INVALIDATE_RKEY,
-	RESPST_ERR_LENGTH,
-	RESPST_ERR_CQ_OVERFLOW,
-	RESPST_ERROR,
-	RESPST_RESET,
-	RESPST_DONE,
-	RESPST_EXIT,
-};
-
 static char *resp_state_name[] = {
 	[RESPST_NONE]				= "NONE",
 	[RESPST_GET_REQ]			= "GET_REQ",
@@ -725,17 +688,12 @@ static enum resp_states process_flush(struct rxe_qp *qp,
 	return RESPST_ACKNOWLEDGE;
 }
 
-/* Guarantee atomicity of atomic operations at the machine level. */
-static DEFINE_SPINLOCK(atomic_ops_lock);
-
 static enum resp_states atomic_reply(struct rxe_qp *qp,
-				     struct rxe_pkt_info *pkt)
+				     struct rxe_pkt_info *pkt)
 {
-	u64 *vaddr;
-	enum resp_states ret;
 	struct rxe_mr *mr = qp->resp.mr;
 	struct resp_res *res = qp->resp.res;
-	u64 value;
+	int err;
 
 	if (!res) {
 		res = rxe_prepare_res(qp, pkt, RXE_ATOMIC_MASK);
@@ -743,32 +701,14 @@ static enum resp_states atomic_reply(struct rxe_qp *qp,
 	}
 
 	if (!res->replay) {
-		if (mr->state != RXE_MR_STATE_VALID) {
-			ret = RESPST_ERR_RKEY_VIOLATION;
-			goto out;
-		}
-
-		vaddr = iova_to_vaddr(mr, qp->resp.va + qp->resp.offset,
-				      sizeof(u64));
-
-		/* check vaddr is 8 bytes aligned. */
-		if (!vaddr || (uintptr_t)vaddr & 7) {
-			ret = RESPST_ERR_MISALIGNED_ATOMIC;
-			goto out;
-		}
-
-		spin_lock_bh(&atomic_ops_lock);
-		res->atomic.orig_val = value = *vaddr;
-
-		if (pkt->opcode == IB_OPCODE_RC_COMPARE_SWAP) {
-			if (value == atmeth_comp(pkt))
-				value = atmeth_swap_add(pkt);
-		} else {
-			value += atmeth_swap_add(pkt);
-		}
+		u64 iova = qp->resp.va + qp->resp.offset;
 
-		*vaddr = value;
-		spin_unlock_bh(&atomic_ops_lock);
+		err = rxe_mr_do_atomic_op(mr, iova, pkt->opcode,
+					  atmeth_comp(pkt),
+					  atmeth_swap_add(pkt),
+					  &res->atomic.orig_val);
+		if (err)
+			return err;
 
 		qp->resp.msn++;
 
@@ -780,9 +720,7 @@ static enum resp_states atomic_reply(struct rxe_qp *qp,
 		qp->resp.status = IB_WC_SUCCESS;
 	}
 
-	ret = RESPST_ACKNOWLEDGE;
-out:
-	return ret;
+	return RESPST_ACKNOWLEDGE;
 }
 
 #ifdef CONFIG_64BIT
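A rough user-space analogue of the two operations rxe_mr_do_atomic_op() now
serializes under atomic_ops_lock; a pthread mutex stands in for the driver's
spinlock and all names are illustrative:

	#include <stdint.h>
	#include <stdbool.h>
	#include <pthread.h>

	static pthread_mutex_t atomic_ops_lock = PTHREAD_MUTEX_INITIALIZER;

	/* Returns the original value through *orig_val, exactly as the
	 * responder must report it back to the requester.
	 */
	static void do_atomic_op(uint64_t *va, bool is_cmp_swp,
				 uint64_t compare, uint64_t swap_add,
				 uint64_t *orig_val)
	{
		pthread_mutex_lock(&atomic_ops_lock);
		*orig_val = *va;
		if (is_cmp_swp) {
			if (*va == compare)
				*va = swap_add;	/* compare & swap */
		} else {
			*va += swap_add;	/* fetch & add */
		}
		pthread_mutex_unlock(&atomic_ops_lock);
	}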
From patchwork Thu Jan 19 23:59:35 2023
X-Patchwork-Submitter: Bob Pearson
X-Patchwork-Id: 13109002
X-Patchwork-Delegate: jgg@ziepe.ca

From: Bob Pearson
To: jgg@nvidia.com, zyjzyj2000@gmail.com, leonro@nvidia.com, yangx.jy@fujitsu.com, lizhijian@fujitsu.com, linux-rdma@vger.kernel.org
Cc: Bob Pearson
Subject: [PATCH for-next v7 4/6] RDMA-rxe: Isolate mr code from atomic_write_reply()
Date: Thu, 19 Jan 2023 17:59:35 -0600
Message-Id: <20230119235936.19728-5-rpearsonhpe@gmail.com>
In-Reply-To: <20230119235936.19728-1-rpearsonhpe@gmail.com>
References: <20230119235936.19728-1-rpearsonhpe@gmail.com>

Isolate the mr-specific code from atomic_write_reply() in rxe_resp.c into
a subroutine rxe_mr_do_atomic_write() in rxe_mr.c. Check the length for
the atomic write operation. Make iova_to_vaddr() static.

Signed-off-by: Bob Pearson
---
 drivers/infiniband/sw/rxe/rxe_loc.h  |  2 +-
 drivers/infiniband/sw/rxe/rxe_mr.c   | 38 ++++++++++++++++-
 drivers/infiniband/sw/rxe/rxe_resp.c | 61 ++++++++++------------------
 3 files changed, 59 insertions(+), 42 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe_loc.h b/drivers/infiniband/sw/rxe/rxe_loc.h
index bcb1bbcf50df..ad8bd6bd728a 100644
--- a/drivers/infiniband/sw/rxe/rxe_loc.h
+++ b/drivers/infiniband/sw/rxe/rxe_loc.h
@@ -71,9 +71,9 @@ int copy_data(struct rxe_pd *pd, int access, struct rxe_dma_info *dma,
 	      void *addr, int length, enum rxe_mr_copy_dir dir);
 int rxe_map_mr_sg(struct ib_mr *ibmr, struct scatterlist *sg,
 		  int sg_nents, unsigned int *sg_offset);
-void *iova_to_vaddr(struct rxe_mr *mr, u64 iova, int length);
 int rxe_mr_do_atomic_op(struct rxe_mr *mr, u64 iova, int opcode,
 			u64 compare, u64 swap_add, u64 *orig_val);
+int rxe_mr_do_atomic_write(struct rxe_mr *mr, u64 iova, u64 value);
 struct rxe_mr *lookup_mr(struct rxe_pd *pd, int access, u32 key,
 			 enum rxe_mr_lookup_type type);
 int mr_check_range(struct rxe_mr *mr, u64 iova, size_t length);
diff --git a/drivers/infiniband/sw/rxe/rxe_mr.c b/drivers/infiniband/sw/rxe/rxe_mr.c
index df9741474f1f..5c4ce43914fa 100644
--- a/drivers/infiniband/sw/rxe/rxe_mr.c
+++ b/drivers/infiniband/sw/rxe/rxe_mr.c
@@ -297,7 +297,7 @@ static void lookup_iova(struct rxe_mr *mr, u64 iova, int *m_out, int *n_out,
 	}
 }
 
-void *iova_to_vaddr(struct rxe_mr *mr, u64 iova, int length)
+static void *iova_to_vaddr(struct rxe_mr *mr, u64 iova, int length)
 {
 	size_t offset;
 	int m, n;
@@ -565,6 +565,42 @@ int rxe_mr_do_atomic_op(struct rxe_mr *mr, u64 iova, int opcode,
 	return 0;
 }
 
+/* only implemented for 64 bit architectures */
+#if defined CONFIG_64BIT
+int rxe_mr_do_atomic_write(struct rxe_mr *mr, u64 iova, u64 value)
+{
+	u64 *va;
+
+	/* See IBA oA19-28 */
+	if (unlikely(mr->state != RXE_MR_STATE_VALID)) {
+		rxe_dbg_mr(mr, "mr not in valid state");
+		return RESPST_ERR_RKEY_VIOLATION;
+	}
+
+	va = iova_to_vaddr(mr, iova, sizeof(value));
+	if (unlikely(!va)) {
+		rxe_dbg_mr(mr, "iova out of range");
+		return RESPST_ERR_RKEY_VIOLATION;
+	}
+
+	/* See IBA A19.4.2 */
+	if (unlikely((uintptr_t)va & 0x7 || iova & 0x7)) {
+		rxe_dbg_mr(mr, "misaligned address");
+		return RESPST_ERR_MISALIGNED_ATOMIC;
+	}
+
+	/* Do atomic write after all prior operations have completed */
+	smp_store_release(va, value);
+
+	return 0;
+}
+#else
+int rxe_mr_do_atomic_write(struct rxe_mr *mr, u64 iova, u64 value)
+{
+	return RESPST_ERR_UNSUPPORTED_OPCODE;
+}
+#endif
+
 int advance_dma_data(struct rxe_dma_info *dma, unsigned int length)
 {
 	struct rxe_sge *sge = &dma->sge[dma->cur_sge];
diff --git a/drivers/infiniband/sw/rxe/rxe_resp.c b/drivers/infiniband/sw/rxe/rxe_resp.c
index 9d4b4e9b42fc..cd2d88de287c 100644
--- a/drivers/infiniband/sw/rxe/rxe_resp.c
+++ b/drivers/infiniband/sw/rxe/rxe_resp.c
@@ -723,30 +723,32 @@ static enum resp_states atomic_reply(struct rxe_qp *qp,
 	return RESPST_ACKNOWLEDGE;
 }
 
-#ifdef CONFIG_64BIT
-static enum resp_states do_atomic_write(struct rxe_qp *qp,
-					struct rxe_pkt_info *pkt)
+static enum resp_states atomic_write_reply(struct rxe_qp *qp,
+					   struct rxe_pkt_info *pkt)
 {
-	struct rxe_mr *mr = qp->resp.mr;
-	int payload = payload_size(pkt);
-	u64 src, *dst;
-
-	if (mr->state != RXE_MR_STATE_VALID)
-		return RESPST_ERR_RKEY_VIOLATION;
+	struct resp_res *res = qp->resp.res;
+	struct rxe_mr *mr;
+	u64 value;
+	u64 iova;
+	int err;
 
-	memcpy(&src, payload_addr(pkt), payload);
+	if (!res) {
+		res = rxe_prepare_res(qp, pkt, RXE_ATOMIC_WRITE_MASK);
+		qp->resp.res = res;
+	}
 
-	dst = iova_to_vaddr(mr, qp->resp.va + qp->resp.offset, payload);
-	/* check vaddr is 8 bytes aligned. */
-	if (!dst || (uintptr_t)dst & 7)
-		return RESPST_ERR_MISALIGNED_ATOMIC;
+	if (res->replay)
+		return RESPST_ACKNOWLEDGE;
 
-	/* Do atomic write after all prior operations have completed */
-	smp_store_release(dst, src);
+	mr = qp->resp.mr;
+	value = *(u64 *)payload_addr(pkt);
+	iova = qp->resp.va + qp->resp.offset;
 
-	/* decrease resp.resid to zero */
-	qp->resp.resid -= sizeof(payload);
+	err = rxe_mr_do_atomic_write(mr, iova, value);
+	if (err)
+		return err;
 
+	qp->resp.resid = 0;
 	qp->resp.msn++;
 
 	/* next expected psn, read handles this separately */
@@ -755,29 +757,8 @@ static enum resp_states atomic_write_reply(struct rxe_qp *qp,
 	qp->resp.opcode = pkt->opcode;
 	qp->resp.status = IB_WC_SUCCESS;
 
-	return RESPST_ACKNOWLEDGE;
-}
-#else
-static enum resp_states do_atomic_write(struct rxe_qp *qp,
-					struct rxe_pkt_info *pkt)
-{
-	return RESPST_ERR_UNSUPPORTED_OPCODE;
-}
-#endif /* CONFIG_64BIT */
-
-static enum resp_states atomic_write_reply(struct rxe_qp *qp,
-					   struct rxe_pkt_info *pkt)
-{
-	struct resp_res *res = qp->resp.res;
-
-	if (!res) {
-		res = rxe_prepare_res(qp, pkt, RXE_ATOMIC_WRITE_MASK);
-		qp->resp.res = res;
-	}
-
-	if (res->replay)
-		return RESPST_ACKNOWLEDGE;
-	return do_atomic_write(qp, pkt);
+	return RESPST_ACKNOWLEDGE;
 }
 
 static struct sk_buff *prepare_ack_packet(struct rxe_qp *qp,
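The heart of rxe_mr_do_atomic_write() is the single smp_store_release(). In
portable C11 the same idea looks roughly like this sketch (not the kernel
primitive itself):

	#include <stdatomic.h>
	#include <stdint.h>

	/* Store a naturally aligned 64-bit value so that (a) readers never
	 * observe a torn value and (b) all writes this thread made earlier
	 * are visible before the new value is, which is what the "after all
	 * prior operations have completed" comment in the patch refers to.
	 */
	static void atomic_write64(_Atomic uint64_t *dst, uint64_t value)
	{
		atomic_store_explicit(dst, value, memory_order_release);
	}

This is also why the operation is only supported on 64-bit architectures:
a single 8-byte store is only atomic there, and the #else stub returns
RESPST_ERR_UNSUPPORTED_OPCODE everywhere else.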
From patchwork Thu Jan 19 23:59:36 2023
X-Patchwork-Submitter: Bob Pearson
X-Patchwork-Id: 13109001
X-Patchwork-Delegate: jgg@ziepe.ca

From: Bob Pearson
To: jgg@nvidia.com, zyjzyj2000@gmail.com, leonro@nvidia.com, yangx.jy@fujitsu.com, lizhijian@fujitsu.com, linux-rdma@vger.kernel.org
Cc: Bob Pearson
Subject: [PATCH for-next v7 5/6] RDMA/rxe: Cleanup page variables in rxe_mr.c
Date: Thu, 19 Jan 2023 17:59:36 -0600
Message-Id: <20230119235936.19728-6-rpearsonhpe@gmail.com>
In-Reply-To: <20230119235936.19728-1-rpearsonhpe@gmail.com>
References: <20230119235936.19728-1-rpearsonhpe@gmail.com>

Clean up the usage of mr->page_shift and mr->page_mask and introduce an
extractor for mr->ibmr.page_size. Normal kernel usage has page_mask mask
off the offset within a page rather than the page number; the rxe driver
had reversed that, which was confusing. Implicitly there can be a per-mr
page_size, which was not uniformly supported.
Signed-off-by: Bob Pearson
---
 drivers/infiniband/sw/rxe/rxe_mr.c    | 31 ++++++++++++---------------
 drivers/infiniband/sw/rxe/rxe_verbs.h | 11 +++++++---
 2 files changed, 22 insertions(+), 20 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe_mr.c b/drivers/infiniband/sw/rxe/rxe_mr.c
index 5c4ce43914fa..2181165ea40d 100644
--- a/drivers/infiniband/sw/rxe/rxe_mr.c
+++ b/drivers/infiniband/sw/rxe/rxe_mr.c
@@ -62,6 +62,9 @@ static void rxe_mr_init(int access, struct rxe_mr *mr)
 	mr->lkey = mr->ibmr.lkey = lkey;
 	mr->rkey = mr->ibmr.rkey = rkey;
 
+	mr->ibmr.page_size = PAGE_SIZE;
+	mr->page_mask = PAGE_MASK;
+	mr->page_shift = PAGE_SHIFT;
 	mr->state = RXE_MR_STATE_INVALID;
 }
 
@@ -151,9 +154,6 @@ int rxe_mr_init_user(struct rxe_dev *rxe, u64 start, u64 length, u64 iova,
 		goto err_release_umem;
 	}
 
-	mr->page_shift = PAGE_SHIFT;
-	mr->page_mask = PAGE_SIZE - 1;
-
 	num_buf = 0;
 	map = mr->map;
 	if (length > 0) {
@@ -182,7 +182,7 @@ int rxe_mr_init_user(struct rxe_dev *rxe, u64 start, u64 length, u64 iova,
 				goto err_release_umem;
 			}
 			buf->addr = (uintptr_t)vaddr;
-			buf->size = PAGE_SIZE;
+			buf->size = mr_page_size(mr);
 			num_buf++;
 			buf++;
 
@@ -191,10 +191,9 @@ int rxe_mr_init_user(struct rxe_dev *rxe, u64 start, u64 length, u64 iova,
 
 	mr->umem = umem;
 	mr->access = access;
-	mr->offset = ib_umem_offset(umem);
+	mr->page_offset = ib_umem_offset(umem);
 	mr->state = RXE_MR_STATE_VALID;
 	mr->ibmr.type = IB_MR_TYPE_USER;
-	mr->ibmr.page_size = PAGE_SIZE;
 
 	return 0;
 
@@ -248,29 +247,27 @@ int rxe_map_mr_sg(struct ib_mr *ibmr, struct scatterlist *sg,
 		  int sg_nents, unsigned int *sg_offset)
 {
 	struct rxe_mr *mr = to_rmr(ibmr);
-	int n;
-
-	mr->nbuf = 0;
+	unsigned int page_size = mr_page_size(mr);
 
-	n = ib_sg_to_pages(ibmr, sg, sg_nents, sg_offset, rxe_set_page);
+	mr->page_shift = ilog2(page_size);
+	mr->page_mask = ~((u64)page_size - 1);
+	mr->page_offset = ibmr->iova & (page_size - 1);
 
-	mr->page_shift = ilog2(ibmr->page_size);
-	mr->page_mask = ibmr->page_size - 1;
-	mr->offset = ibmr->iova & mr->page_mask;
+	mr->nbuf = 0;
 
-	return n;
+	return ib_sg_to_pages(ibmr, sg, sg_nents, sg_offset, rxe_set_page);
 }
 
 static void lookup_iova(struct rxe_mr *mr, u64 iova, int *m_out, int *n_out,
 			size_t *offset_out)
 {
-	size_t offset = iova - mr->ibmr.iova + mr->offset;
+	size_t offset = iova - mr->ibmr.iova + mr->page_offset;
 	int map_index;
 	int buf_index;
 	u64 length;
 
 	if (likely(mr->page_shift)) {
-		*offset_out = offset & mr->page_mask;
+		*offset_out = offset & (mr_page_size(mr) - 1);
 		offset >>= mr->page_shift;
 		*n_out = offset & mr->map_mask;
 		*m_out = offset >> mr->map_shift;
@@ -329,7 +326,7 @@ int rxe_flush_pmem_iova(struct rxe_mr *mr, u64 iova, int length)
 	if (mr->ibmr.type == IB_MR_TYPE_DMA)
 		return -EFAULT;
 
-	offset = (iova - mr->ibmr.iova + mr->offset) & mr->page_mask;
+	offset = (iova - mr->ibmr.iova + mr->page_offset) & mr->page_mask;
 	while (length > 0) {
 		u8 *va;
 		int bytes;
diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.h b/drivers/infiniband/sw/rxe/rxe_verbs.h
index 19ddfa890480..bfc94caaeec5 100644
--- a/drivers/infiniband/sw/rxe/rxe_verbs.h
+++ b/drivers/infiniband/sw/rxe/rxe_verbs.h
@@ -310,11 +310,11 @@ struct rxe_mr {
 	u32			lkey;
 	u32			rkey;
 	enum rxe_mr_state	state;
-	u32			offset;
 	int			access;
 
-	int			page_shift;
-	int			page_mask;
+	unsigned int		page_offset;
+	unsigned int		page_shift;
+	u64			page_mask;
 
 	int			map_shift;
 	int			map_mask;
@@ -329,6 +329,11 @@ struct rxe_mr {
 	struct rxe_map		**map;
 };
 
+static inline unsigned int mr_page_size(struct rxe_mr *mr)
+{
mr->ibmr.page_size : PAGE_SIZE; +} + enum rxe_mw_state { RXE_MW_STATE_INVALID = RXE_MR_STATE_INVALID, RXE_MW_STATE_FREE = RXE_MR_STATE_FREE, From patchwork Thu Jan 19 23:59:37 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Bob Pearson X-Patchwork-Id: 13109003 X-Patchwork-Delegate: jgg@ziepe.ca Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 356F4C678D4 for ; Fri, 20 Jan 2023 00:03:58 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230091AbjATAD5 (ORCPT ); Thu, 19 Jan 2023 19:03:57 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:55738 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229781AbjATADe (ORCPT ); Thu, 19 Jan 2023 19:03:34 -0500 Received: from mail-oi1-x229.google.com (mail-oi1-x229.google.com [IPv6:2607:f8b0:4864:20::229]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 2D2C0A19B8 for ; Thu, 19 Jan 2023 16:02:57 -0800 (PST) Received: by mail-oi1-x229.google.com with SMTP id p133so3107974oig.8 for ; Thu, 19 Jan 2023 16:02:57 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=v8s4ig8J9YCG+2uAXY9gVXDe1odWmUMCNiUDfhOW5Rw=; b=A+GB1z/gbkyx17t4nrA3ffP9DlChsVY7tLt53WpfHdPYQP68jQ0RhnGe4HVBktNWrO 5jFbVNItCgs8b/yY864S9MfF/xSyy5ZYRa6FjDvi0dymSaqSODwn4AQwM7PyJCJBSHLt Urk1V4e8uslC13W3R/JPgbhFg9K6uMsFxIM1z2t+ajchTIyKFK2csVei+KxJdGRsa/Fg 2tZJmBUvOyQsWE77zLCGqFO8HgirLQwmluQZbH5GUz3yXqFwYLExbbCedE6ZTrJ3UdY5 S7NouVRZrG/NnpTPN1Tr+OnPQzMYk0+7s1VD3WLQ8W29svzFx6IayZvbs+VYy+7JrWsS T/tQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=v8s4ig8J9YCG+2uAXY9gVXDe1odWmUMCNiUDfhOW5Rw=; b=dY2mPR1dm3ejNzpx9qV7Dj1jrXT4Yo6tYYaAmRsbf7OxbkLGpaDclgWc/thPV3GbiS eJuG8tp9uVD2OTfZ5xRUSy/Sfbz1plCyRemnq13MWovutwuRjrEgaNT/7vhpovuUmUlS G1YoJGJJaBDRr4GOwA7XDAJtWZytyc2bgHziWyJ09NAKLpvUBGeJlXe1T9/3wNrmI1I8 kGL5tnHtmEuYeNyks/TwSpjXYCqsbWn2h4VUPd096dWqA2wRtIlOYPCXsCuCSHqfaDr7 G8xfVr4z/oH/INyjKqO07Uzm1ObumoABROPEqOLE2fdf8ouWEfgh8nKKQi0AKxjGDr0/ gnsw== X-Gm-Message-State: AFqh2krUPkLowFgGS/Npd4wtSwzlXyssQ02Rv/3UkzNbhKBhwmEuAKq6 8XfSWre1w6RQ+MskPeXeInuZjUraq7MMSA== X-Google-Smtp-Source: AMrXdXvcWFrZUaCTnA0PjxwhUAFqL9moMgVYoNcfhyJxqxR3AmojwY9Uma6nMfbwFgeYJ8Cwjybjxg== X-Received: by 2002:aca:2b19:0:b0:35a:536c:3e65 with SMTP id i25-20020aca2b19000000b0035a536c3e65mr5731575oik.35.1674172976394; Thu, 19 Jan 2023 16:02:56 -0800 (PST) Received: from rpearson-X570-AORUS-PRO-WIFI.tx.rr.com (2603-8081-140c-1a00-d086-74d8-5274-c0f1.res6.spectrum.com. 
From patchwork Thu Jan 19 23:59:37 2023
X-Patchwork-Submitter: Bob Pearson
X-Patchwork-Id: 13109003
X-Patchwork-Delegate: jgg@ziepe.ca

From: Bob Pearson
To: jgg@nvidia.com, zyjzyj2000@gmail.com, leonro@nvidia.com, yangx.jy@fujitsu.com, lizhijian@fujitsu.com, linux-rdma@vger.kernel.org
Cc: Bob Pearson
Subject: [PATCH for-next v7 6/6] RDMA/rxe: Replace rxe_map and rxe_phys_buf by xarray
Date: Thu, 19 Jan 2023 17:59:37 -0600
Message-Id: <20230119235936.19728-7-rpearsonhpe@gmail.com>
In-Reply-To: <20230119235936.19728-1-rpearsonhpe@gmail.com>
References: <20230119235936.19728-1-rpearsonhpe@gmail.com>

Replace struct rxe_phys_buf and struct rxe_map by struct xarray in
rxe_verbs.h. This allows using rcu locking on reads for the memory maps
stored in each mr. This is based on a sketch of a patch from Jason
Gunthorpe in the link below. Some changes were needed to make this work.
It applies cleanly to the current for-next and passes the pyverbs,
perftest and the same blktests test cases which run today.

Link: https://lore.kernel.org/linux-rdma/Y3gvZr6%2FNCii9Avy@nvidia.com/
Co-developed-by: Jason Gunthorpe
Signed-off-by: Bob Pearson
---
 drivers/infiniband/sw/rxe/rxe_loc.h   |   6 +-
 drivers/infiniband/sw/rxe/rxe_mr.c    | 537 ++++++++++++--------------
 drivers/infiniband/sw/rxe/rxe_verbs.h |  21 +-
 3 files changed, 262 insertions(+), 302 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe_loc.h b/drivers/infiniband/sw/rxe/rxe_loc.h
index ad8bd6bd728a..1bb0cb479eb1 100644
--- a/drivers/infiniband/sw/rxe/rxe_loc.h
+++ b/drivers/infiniband/sw/rxe/rxe_loc.h
@@ -64,9 +64,9 @@ void rxe_mr_init_dma(int access, struct rxe_mr *mr);
 int rxe_mr_init_user(struct rxe_dev *rxe, u64 start, u64 length, u64 iova,
 		     int access, struct rxe_mr *mr);
 int rxe_mr_init_fast(int max_pages, struct rxe_mr *mr);
-int rxe_flush_pmem_iova(struct rxe_mr *mr, u64 iova, int length);
-int rxe_mr_copy(struct rxe_mr *mr, u64 iova, void *addr, int length,
-		enum rxe_mr_copy_dir dir);
+int rxe_flush_pmem_iova(struct rxe_mr *mr, u64 iova, unsigned int length);
+int rxe_mr_copy(struct rxe_mr *mr, u64 iova, void *addr,
+		unsigned int length, enum rxe_mr_copy_dir dir);
 int copy_data(struct rxe_pd *pd, int access, struct rxe_dma_info *dma,
 	      void *addr, int length, enum rxe_mr_copy_dir dir);
 int rxe_map_mr_sg(struct ib_mr *ibmr, struct scatterlist *sg,
diff --git a/drivers/infiniband/sw/rxe/rxe_mr.c b/drivers/infiniband/sw/rxe/rxe_mr.c
index 2181165ea40d..efdc2ab02a91 100644
--- a/drivers/infiniband/sw/rxe/rxe_mr.c
+++ b/drivers/infiniband/sw/rxe/rxe_mr.c
@@ -62,60 +62,31 @@ static void rxe_mr_init(int access, struct rxe_mr *mr)
 	mr->lkey = mr->ibmr.lkey = lkey;
 	mr->rkey = mr->ibmr.rkey = rkey;
 
+	mr->access = access;
 	mr->ibmr.page_size = PAGE_SIZE;
 	mr->page_mask = PAGE_MASK;
 	mr->page_shift = PAGE_SHIFT;
 	mr->state = RXE_MR_STATE_INVALID;
 }
 
-static int rxe_mr_alloc(struct rxe_mr *mr, int num_buf)
-{
-	int i;
-	int num_map;
-	struct rxe_map **map = mr->map;
-
-	num_map = (num_buf + RXE_BUF_PER_MAP - 1) / RXE_BUF_PER_MAP;
-
-	mr->map = kmalloc_array(num_map, sizeof(*map), GFP_KERNEL);
-	if (!mr->map)
-		goto err1;
-
-	for (i = 0; i < num_map; i++) {
-		mr->map[i] = kmalloc(sizeof(**map), GFP_KERNEL);
-		if (!mr->map[i])
-			goto err2;
-	}
-
-	BUILD_BUG_ON(!is_power_of_2(RXE_BUF_PER_MAP));
-
-	mr->map_shift = ilog2(RXE_BUF_PER_MAP);
-	mr->map_mask = RXE_BUF_PER_MAP - 1;
-
-	mr->num_buf = num_buf;
-	mr->num_map = num_map;
-	mr->max_buf = num_map * RXE_BUF_PER_MAP;
-
-	return 0;
-
-err2:
-	for (i--; i >= 0; i--)
-		kfree(mr->map[i]);
-
-	kfree(mr->map);
-	mr->map = NULL;
-err1:
-	return -ENOMEM;
-}
-
 void rxe_mr_init_dma(int access, struct rxe_mr *mr)
 {
 	rxe_mr_init(access, mr);
 
-	mr->access = access;
 	mr->state = RXE_MR_STATE_VALID;
 	mr->ibmr.type = IB_MR_TYPE_DMA;
 }
 
+static unsigned long rxe_mr_iova_to_index(struct rxe_mr *mr, u64 iova)
+{
+	return (iova >> mr->page_shift) - (mr->ibmr.iova >> mr->page_shift);
+}
+
+static unsigned long rxe_mr_iova_to_page_offset(struct rxe_mr *mr, u64 iova)
+{
+	return iova & (mr_page_size(mr) - 1);
+}
+
 static bool is_pmem_page(struct page *pg)
 {
 	unsigned long paddr = page_to_phys(pg);
@@ -125,82 +96,97 @@ static bool is_pmem_page(struct page *pg)
 				 IORES_DESC_PERSISTENT_MEMORY);
 }
 
+static int rxe_mr_fill_pages_from_sgt(struct rxe_mr *mr, struct sg_table *sgt)
+{
+	XA_STATE(xas, &mr->page_list, 0);
+	struct sg_page_iter sg_iter;
+	struct page *page;
+	bool persistent = !!(mr->access & IB_ACCESS_FLUSH_PERSISTENT);
+
+	__sg_page_iter_start(&sg_iter, sgt->sgl, sgt->orig_nents, 0);
+	if (!__sg_page_iter_next(&sg_iter))
+		return 0;
+
+	do {
+		xas_lock(&xas);
+		while (true) {
+			page = sg_page_iter_page(&sg_iter);
+
+			if (persistent && !is_pmem_page(page)) {
+				rxe_dbg_mr(mr, "Page can't be persistent\n");
+				return -EINVAL;
+			}
+
+			xas_store(&xas, page);
+			if (xas_error(&xas))
+				break;
+			xas_next(&xas);
+			if (!__sg_page_iter_next(&sg_iter))
+				break;
+		}
+		xas_unlock(&xas);
+	} while (xas_nomem(&xas, GFP_KERNEL));
+
+	return xas_error(&xas);
+}
+
 int rxe_mr_init_user(struct rxe_dev *rxe, u64 start, u64 length, u64 iova,
 		     int access, struct rxe_mr *mr)
 {
-	struct rxe_map		**map;
-	struct rxe_phys_buf	*buf = NULL;
-	struct ib_umem		*umem;
-	struct sg_page_iter	sg_iter;
-	int			num_buf;
-	void			*vaddr;
+	struct ib_umem *umem;
 	int err;
 
+	rxe_mr_init(access, mr);
+
+	xa_init(&mr->page_list);
+
 	umem = ib_umem_get(&rxe->ib_dev, start, length, access);
 	if (IS_ERR(umem)) {
 		rxe_dbg_mr(mr, "Unable to pin memory region err = %d\n",
 			(int)PTR_ERR(umem));
-		err = PTR_ERR(umem);
-		goto err_out;
+		return PTR_ERR(umem);
 	}
 
-	num_buf = ib_umem_num_pages(umem);
-
-	rxe_mr_init(access, mr);
-
-	err = rxe_mr_alloc(mr, num_buf);
+	err = rxe_mr_fill_pages_from_sgt(mr, &umem->sgt_append.sgt);
 	if (err) {
-		rxe_dbg_mr(mr, "Unable to allocate memory for map\n");
-		goto err_release_umem;
+		ib_umem_release(umem);
+		return err;
 	}
 
-	num_buf = 0;
-	map = mr->map;
-	if (length > 0) {
-		bool persistent_access = access & IB_ACCESS_FLUSH_PERSISTENT;
-
-		buf = map[0]->buf;
-		for_each_sgtable_page (&umem->sgt_append.sgt, &sg_iter, 0) {
-			struct page *pg = sg_page_iter_page(&sg_iter);
+	mr->umem = umem;
+	mr->ibmr.type = IB_MR_TYPE_USER;
+	mr->state = RXE_MR_STATE_VALID;
 
-			if (persistent_access && !is_pmem_page(pg)) {
-				rxe_dbg_mr(mr, "Unable to register persistent access to non-pmem device\n");
-				err = -EINVAL;
-				goto err_release_umem;
-			}
+	return 0;
+}
 
-			if (num_buf >= RXE_BUF_PER_MAP) {
-				map++;
-				buf = map[0]->buf;
-				num_buf = 0;
-			}
+static int rxe_mr_alloc(struct rxe_mr *mr, int num_buf)
+{
+	XA_STATE(xas, &mr->page_list, 0);
+	int i = 0;
+	int err;
 
-			vaddr = page_address(pg);
-			if (!vaddr) {
-				rxe_dbg_mr(mr, "Unable to get virtual address\n");
-				err = -ENOMEM;
-				goto err_release_umem;
-			}
-			buf->addr = (uintptr_t)vaddr;
-			buf->size = mr_page_size(mr);
-			num_buf++;
-			buf++;
+	xa_init(&mr->page_list);
 
+	do {
+		xas_lock(&xas);
+		while (i != num_buf) {
+			xas_store(&xas, XA_ZERO_ENTRY);
+			if (xas_error(&xas))
+				break;
+			xas_next(&xas);
+			i++;
 		}
-	}
+		xas_unlock(&xas);
+	} while (xas_nomem(&xas, GFP_KERNEL));
 
-	mr->umem = umem;
-	mr->access = access;
-	mr->page_offset = ib_umem_offset(umem);
-	mr->state = RXE_MR_STATE_VALID;
-	mr->ibmr.type = IB_MR_TYPE_USER;
+	err = xas_error(&xas);
+	if (err)
+		return err;
 
-	return 0;
+	mr->num_buf = num_buf;
 
-err_release_umem:
-	ib_umem_release(umem);
-err_out:
-	return err;
+	return 0;
 }
 
 int rxe_mr_init_fast(int max_pages, struct rxe_mr *mr)
@@ -214,7 +200,6 @@ int rxe_mr_init_fast(int max_pages, struct rxe_mr *mr)
 	if (err)
 		goto err1;
 
-	mr->max_buf = max_pages;
 	mr->state = RXE_MR_STATE_FREE;
 	mr->ibmr.type = IB_MR_TYPE_MEM_REG;
 
@@ -224,206 +209,129 @@ int rxe_mr_init_fast(int max_pages, struct rxe_mr *mr)
 	return err;
 }
 
-static int rxe_set_page(struct ib_mr *ibmr, u64 addr)
+static int rxe_set_page(struct ib_mr *ibmr, u64 iova)
 {
 	struct rxe_mr *mr = to_rmr(ibmr);
-	struct rxe_map *map;
-	struct rxe_phys_buf *buf;
+	struct page *page = virt_to_page(iova & mr->page_mask);
+	XA_STATE(xas, &mr->page_list, mr->nbuf);
+	bool persistent = !!(mr->access & IB_ACCESS_FLUSH_PERSISTENT);
+	int err;
+
+	if (persistent && !is_pmem_page(page)) {
+		rxe_dbg_mr(mr, "Page cannot be persistent\n");
+		return -EINVAL;
+	}
 
 	if (unlikely(mr->nbuf == mr->num_buf))
 		return -ENOMEM;
 
-	map = mr->map[mr->nbuf / RXE_BUF_PER_MAP];
-	buf = &map->buf[mr->nbuf % RXE_BUF_PER_MAP];
+	do {
+		xas_lock(&xas);
+		xas_store(&xas, page);
+		xas_unlock(&xas);
+	} while (xas_nomem(&xas, GFP_KERNEL));
 
-	buf->addr = addr;
-	buf->size = ibmr->page_size;
-	mr->nbuf++;
+	err = xas_error(&xas);
+	if (err)
+		return err;
 
+	mr->nbuf++;
 	return 0;
 }
 
-int rxe_map_mr_sg(struct ib_mr *ibmr, struct scatterlist *sg,
+int rxe_map_mr_sg(struct ib_mr *ibmr, struct scatterlist *sgl,
 		  int sg_nents, unsigned int *sg_offset)
 {
 	struct rxe_mr *mr = to_rmr(ibmr);
 	unsigned int page_size = mr_page_size(mr);
 
+	mr->nbuf = 0;
 	mr->page_shift = ilog2(page_size);
 	mr->page_mask = ~((u64)page_size - 1);
-	mr->page_offset = ibmr->iova & (page_size - 1);
-
-	mr->nbuf = 0;
+	mr->page_offset = mr->ibmr.iova & (page_size - 1);
 
-	return ib_sg_to_pages(ibmr, sg, sg_nents, sg_offset, rxe_set_page);
+	return ib_sg_to_pages(ibmr, sgl, sg_nents, sg_offset, rxe_set_page);
 }
 
-static void lookup_iova(struct rxe_mr *mr, u64 iova, int *m_out, int *n_out,
-			size_t *offset_out)
+static int rxe_mr_copy_xarray(struct rxe_mr *mr, u64 iova, void *addr,
+			      unsigned int length, enum rxe_mr_copy_dir dir)
 {
-	size_t offset = iova - mr->ibmr.iova + mr->page_offset;
-	int map_index;
-	int buf_index;
-	u64 length;
-
-	if (likely(mr->page_shift)) {
-		*offset_out = offset & (mr_page_size(mr) - 1);
-		offset >>= mr->page_shift;
-		*n_out = offset & mr->map_mask;
-		*m_out = offset >> mr->map_shift;
-	} else {
-		map_index = 0;
-		buf_index = 0;
+	unsigned int page_offset = rxe_mr_iova_to_page_offset(mr, iova);
+	unsigned long index = rxe_mr_iova_to_index(mr, iova);
+	unsigned int bytes;
+	struct page *page;
+	void *va;
 
-		length = mr->map[map_index]->buf[buf_index].size;
+	while (length) {
+		page = xa_load(&mr->page_list, index);
+		if (!page)
+			return -EFAULT;
 
-		while (offset >= length) {
-			offset -= length;
-			buf_index++;
+		bytes = min_t(unsigned int, length,
+			      mr_page_size(mr) - page_offset);
+		va = kmap_local_page(page);
+		if (dir == RXE_FROM_MR_OBJ)
+			memcpy(addr, va + page_offset, bytes);
+		else
+			memcpy(va + page_offset, addr, bytes);
+		kunmap_local(va);
+
+		page_offset = 0;
+		addr += bytes;
+		length -= bytes;
+		index++;
 
-			if (buf_index == RXE_BUF_PER_MAP) {
-				map_index++;
-				buf_index = 0;
-			}
-			length = mr->map[map_index]->buf[buf_index].size;
-		}
-
-		*m_out = map_index;
-		*n_out = buf_index;
-		*offset_out = offset;
 	}
+	return 0;
 }
 
-static void *iova_to_vaddr(struct rxe_mr *mr, u64 iova, int length)
-{
-	size_t offset;
-	int m, n;
-
-	if (mr->state != RXE_MR_STATE_VALID)
-		return NULL;
-
-	if (mr->ibmr.type == IB_MR_TYPE_DMA)
-		return (void *)(uintptr_t)iova;
-
-	if (mr_check_range(mr, iova, length))
-		return NULL;
-
-	lookup_iova(mr, iova, &m, &n, &offset);
-
-	if (offset + length > mr->map[m]->buf[n].size)
-		return NULL;
-
-	return (void *)(uintptr_t)mr->map[m]->buf[n].addr + offset;
-}
-
-int rxe_flush_pmem_iova(struct rxe_mr *mr, u64 iova, int length)
+static void rxe_mr_copy_dma(struct rxe_mr *mr, u64 iova, void *addr,
+			    unsigned int length, enum rxe_mr_copy_dir dir)
 {
-	size_t offset;
+	unsigned int page_offset = iova & (PAGE_SIZE - 1);
+	unsigned int bytes;
+	struct page *page;
+	u8 *va;
 
-	if (length == 0)
-		return 0;
-
-	if (mr->ibmr.type == IB_MR_TYPE_DMA)
-		return -EFAULT;
-
-	offset = (iova - mr->ibmr.iova + mr->page_offset) & mr->page_mask;
-	while (length > 0) {
-		u8 *va;
-		int bytes;
-
-		bytes = mr->ibmr.page_size - offset;
-		if (bytes > length)
-			bytes = length;
-
-		va = iova_to_vaddr(mr, iova, length);
-		if (!va)
-			return -EFAULT;
-
-		arch_wb_cache_pmem(va, bytes);
-
-		length -= bytes;
+	while (length) {
+		page = virt_to_page(iova & mr->page_mask);
+		bytes = min_t(unsigned int, length,
+			      PAGE_SIZE - page_offset);
+		va = kmap_local_page(page);
+
+		if (dir == RXE_TO_MR_OBJ)
+			memcpy(va + page_offset, addr, bytes);
+		else
+			memcpy(addr, va + page_offset, bytes);
+
+		kunmap_local(va);
+		page_offset = 0;
 		iova += bytes;
-		offset = 0;
+		addr += bytes;
+		length -= bytes;
 	}
-
-	return 0;
 }
 
-/* copy data from a range (vaddr, vaddr+length-1) to or from
- * a mr object starting at iova.
- */
-int rxe_mr_copy(struct rxe_mr *mr, u64 iova, void *addr, int length,
-		enum rxe_mr_copy_dir dir)
+int rxe_mr_copy(struct rxe_mr *mr, u64 iova, void *addr,
+		unsigned int length, enum rxe_mr_copy_dir dir)
 {
-	int			err;
-	int			bytes;
-	u8			*va;
-	struct rxe_map		**map;
-	struct rxe_phys_buf	*buf;
-	int			m;
-	int			i;
-	size_t			offset;
+	int err;
 
 	if (length == 0)
 		return 0;
 
 	if (mr->ibmr.type == IB_MR_TYPE_DMA) {
-		u8 *src, *dest;
-
-		src = (dir == RXE_TO_MR_OBJ) ? addr : ((void *)(uintptr_t)iova);
-
-		dest = (dir == RXE_TO_MR_OBJ) ? ((void *)(uintptr_t)iova) : addr;
-
-		memcpy(dest, src, length);
-
+		rxe_mr_copy_dma(mr, iova, addr, length, dir);
 		return 0;
 	}
 
-	WARN_ON_ONCE(!mr->map);
 	err = mr_check_range(mr, iova, length);
-	if (err) {
-		err = -EFAULT;
-		goto err1;
-	}
-
-	lookup_iova(mr, iova, &m, &i, &offset);
-
-	map = mr->map + m;
-	buf = map[0]->buf + i;
-
-	while (length > 0) {
-		u8 *src, *dest;
-
-		va = (u8 *)(uintptr_t)buf->addr + offset;
-		src = (dir == RXE_TO_MR_OBJ) ? addr : va;
-		dest = (dir == RXE_TO_MR_OBJ) ? va : addr;
-
-		bytes = buf->size - offset;
-
-		if (bytes > length)
-			bytes = length;
-
-		memcpy(dest, src, bytes);
-
-		length -= bytes;
-		addr += bytes;
-
-		offset = 0;
-		buf++;
-		i++;
-
-		if (i == RXE_BUF_PER_MAP) {
-			i = 0;
-			map++;
-			buf = map[0]->buf;
-		}
+	if (unlikely(err)) {
+		rxe_dbg_mr(mr, "iova out of range");
+		return err;
 	}
 
-	return 0;
-
-err1:
-	return err;
+	return rxe_mr_copy_xarray(mr, iova, addr, length, dir);
 }
 
 /* copy data in or out of a wqe, i.e. sg list
@@ -495,7 +403,6 @@ int copy_data(
 
 		if (bytes > 0) {
 			iova = sge->addr + offset;
-
 			err = rxe_mr_copy(mr, iova, addr, bytes, dir);
 			if (err)
 				goto err2;
@@ -522,50 +429,113 @@ int copy_data(
 	return err;
 }
 
+int rxe_flush_pmem_iova(struct rxe_mr *mr, u64 iova, unsigned int length)
+{
+	unsigned int page_offset;
+	unsigned long index;
+	struct page *page;
+	unsigned int bytes;
+	int err;
+	u8 *va;
+
+	if (length == 0)
+		return 0;
+
+	if (mr->ibmr.type == IB_MR_TYPE_DMA)
+		return -EFAULT;
+
+	err = mr_check_range(mr, iova, length);
+	if (err)
+		return err;
+
+	while (length > 0) {
+		index = rxe_mr_iova_to_index(mr, iova);
+		page = xa_load(&mr->page_list, index);
+		page_offset = rxe_mr_iova_to_page_offset(mr, iova);
+		if (!page)
+			return -EFAULT;
+		bytes = min_t(unsigned int, length,
+			      mr_page_size(mr) - page_offset);
+
+		va = kmap_local_page(page);
+		if (!va)
+			return -EFAULT;
+
+		arch_wb_cache_pmem(va + page_offset, bytes);
+
+		length -= bytes;
+		iova += bytes;
+		page_offset = 0;
+	}
+
+	return 0;
+}
+
 /* Guarantee atomicity of atomic operations at the machine level. */
 static DEFINE_SPINLOCK(atomic_ops_lock);
 
 int rxe_mr_do_atomic_op(struct rxe_mr *mr, u64 iova, int opcode,
 			u64 compare, u64 swap_add, u64 *orig_val)
 {
-	u64 *va;
+	unsigned int page_offset;
+	struct page *page;
 	u64 value;
+	u64 *va;
 
-	if (mr->state != RXE_MR_STATE_VALID) {
+	if (unlikely(mr->state != RXE_MR_STATE_VALID)) {
 		rxe_dbg_mr(mr, "mr not in valid state");
 		return RESPST_ERR_RKEY_VIOLATION;
 	}
 
-	va = iova_to_vaddr(mr, iova, sizeof(u64));
-	if (!va) {
-		rxe_dbg_mr(mr, "iova out of range");
-		return RESPST_ERR_RKEY_VIOLATION;
+	if (mr->ibmr.type == IB_MR_TYPE_DMA) {
+		page_offset = iova & (PAGE_SIZE - 1);
+		page = virt_to_page(iova & PAGE_MASK);
+	} else {
+		unsigned long index;
+		int err;
+
+		err = mr_check_range(mr, iova, sizeof(value));
+		if (err) {
+			rxe_dbg_mr(mr, "iova out of range");
+			return RESPST_ERR_RKEY_VIOLATION;
+		}
+		page_offset = rxe_mr_iova_to_page_offset(mr, iova);
+		index = rxe_mr_iova_to_index(mr, iova);
+		page = xa_load(&mr->page_list, index);
+		if (!page)
+			return RESPST_ERR_RKEY_VIOLATION;
 	}
 
-	if ((uintptr_t)va & 0x7) {
+	if (unlikely(page_offset & 0x7)) {
 		rxe_dbg_mr(mr, "iova not aligned");
 		return RESPST_ERR_MISALIGNED_ATOMIC;
 	}
 
+	va = kmap_local_page(page);
+
 	spin_lock_bh(&atomic_ops_lock);
-	value = *orig_val = *va;
+	value = *orig_val = va[page_offset >> 3];
 
 	if (opcode == IB_OPCODE_RC_COMPARE_SWAP) {
 		if (value == compare)
-			*va = swap_add;
+			va[page_offset >> 3] = swap_add;
 	} else {
 		value += swap_add;
-		*va = value;
+		va[page_offset >> 3] = value;
 	}
 	spin_unlock_bh(&atomic_ops_lock);
 
+	kunmap_local(va);
+
 	return 0;
 }
 
-/* only implemented for 64 bit architectures */
 #if defined CONFIG_64BIT
+/* only implemented or called for 64 bit architectures */
 int rxe_mr_do_atomic_write(struct rxe_mr *mr, u64 iova, u64 value)
 {
+	unsigned int page_offset;
+	struct page *page;
 	u64 *va;
 
 	/* See IBA oA19-28 */
@@ -574,20 +544,38 @@ int rxe_mr_do_atomic_write(struct rxe_mr *mr, u64 iova, u64 value)
 		return RESPST_ERR_RKEY_VIOLATION;
 	}
 
-	va = iova_to_vaddr(mr, iova, sizeof(value));
-	if (unlikely(!va)) {
-		rxe_dbg_mr(mr, "iova out of range");
-		return RESPST_ERR_RKEY_VIOLATION;
+	if (mr->ibmr.type == IB_MR_TYPE_DMA) {
+		page_offset = iova & (PAGE_SIZE - 1);
+		page = virt_to_page(iova & PAGE_MASK);
+	} else {
+		unsigned long index;
+		int err;
+
+		/* See IBA oA19-28 */
+		err = mr_check_range(mr, iova, sizeof(value));
+		if (unlikely(err)) {
+			rxe_dbg_mr(mr, "iova out of range");
+			return RESPST_ERR_RKEY_VIOLATION;
+		}
+		page_offset = rxe_mr_iova_to_page_offset(mr, iova);
+		index = rxe_mr_iova_to_index(mr, iova);
+		page = xa_load(&mr->page_list, index);
+		if (!page)
+			return RESPST_ERR_RKEY_VIOLATION;
 	}
 
 	/* See IBA A19.4.2 */
-	if (unlikely((uintptr_t)va & 0x7 || iova & 0x7)) {
+	if (unlikely(page_offset & 0x7)) {
 		rxe_dbg_mr(mr, "misaligned address");
 		return RESPST_ERR_MISALIGNED_ATOMIC;
 	}
 
+	va = kmap_local_page(page);
+
 	/* Do atomic write after all prior operations have completed */
-	smp_store_release(va, value);
+	smp_store_release(&va[page_offset >> 3], value);
+
+	kunmap_local(va);
 
 	return 0;
 }
@@ -631,12 +619,6 @@ int advance_dma_data(struct rxe_dma_info *dma, unsigned int length)
 	return 0;
 }
 
-/* (1) find the mr corresponding to lkey/rkey
- *     depending on lookup_type
- * (2) verify that the (qp) pd matches the mr pd
- * (3) verify that the mr can support the requested access
- * (4) verify that mr state is valid
- */
 struct rxe_mr *lookup_mr(struct rxe_pd *pd, int access, u32 key,
 			 enum rxe_mr_lookup_type type)
 {
@@ -757,15 +739,10 @@ int rxe_dereg_mr(struct ib_mr *ibmr, struct ib_udata *udata)
 void rxe_mr_cleanup(struct rxe_pool_elem *elem)
 {
 	struct rxe_mr *mr = container_of(elem, typeof(*mr), elem);
-	int i;
 
 	rxe_put(mr_pd(mr));
 	ib_umem_release(mr->umem);
 
-	if (mr->map) {
-		for (i = 0; i < mr->num_map; i++)
-			kfree(mr->map[i]);
-
-		kfree(mr->map);
-	}
+	if (mr->ibmr.type != IB_MR_TYPE_DMA)
+		xa_destroy(&mr->page_list);
 }
diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.h b/drivers/infiniband/sw/rxe/rxe_verbs.h
index bfc94caaeec5..c269ae2a3224 100644
--- a/drivers/infiniband/sw/rxe/rxe_verbs.h
+++ b/drivers/infiniband/sw/rxe/rxe_verbs.h
@@ -283,17 +283,6 @@ enum rxe_mr_lookup_type {
 	RXE_LOOKUP_REMOTE,
 };
 
-#define RXE_BUF_PER_MAP		(PAGE_SIZE / sizeof(struct rxe_phys_buf))
-
-struct rxe_phys_buf {
-	u64	addr;
-	u64	size;
-};
-
-struct rxe_map {
-	struct rxe_phys_buf	buf[RXE_BUF_PER_MAP];
-};
-
 static inline int rkey_is_mw(u32 rkey)
 {
 	u32 index = rkey >> 8;
@@ -311,22 +300,16 @@ struct rxe_mr {
 	u32			rkey;
 	enum rxe_mr_state	state;
 	int			access;
+	atomic_t		num_mw;
 
 	unsigned int		page_offset;
 	unsigned int		page_shift;
 	u64			page_mask;
 
-	int			map_shift;
-	int			map_mask;
-
 	u32			num_buf;
 	u32			nbuf;
 
-	u32			max_buf;
-	u32			num_map;
-
-	atomic_t		num_mw;
-
-	struct rxe_map		**map;
+	struct xarray		page_list;
 };
 
 static inline unsigned int mr_page_size(struct rxe_mr *mr)
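The store and load idiom this final patch converts rxe_mr.c to can be
reduced to a skeleton; a kernel-style sketch mirroring the patch's use of
the XArray API (not additional driver code): stores run under the xa lock
and retry whenever xas_nomem() had to allocate, while lookups are lockless
RCU reads.

	/* condensed from rxe_set_page(): index -> page store with
	 * preallocation retry
	 */
	static int store_page(struct xarray *xa, unsigned long index,
			      struct page *page)
	{
		XA_STATE(xas, xa, index);

		do {
			xas_lock(&xas);
			xas_store(&xas, page);
			xas_unlock(&xas);
		} while (xas_nomem(&xas, GFP_KERNEL));

		return xas_error(&xas);
	}

	/* condensed from rxe_mr_copy_xarray(): lockless lookup on the
	 * read path, which is what enables rcu-protected reads of the
	 * per-mr memory map
	 */
	static struct page *load_page(struct xarray *xa, unsigned long index)
	{
		return xa_load(xa, index);
	}

This replaces the old two-level rxe_map/rxe_phys_buf tables with a single
index space of pages, which is why lookup_iova() and iova_to_vaddr() could
be deleted outright.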