From patchwork Tue Jul 26 07:19:07 2022
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Michael Guralnik
X-Patchwork-Id: 12928864
X-Patchwork-Delegate: jgg@ziepe.ca
From: Michael Guralnik
Subject: [PATCH rdma-next v1 1/5] RDMA/mlx5: Replace ent->lock with xa_lock
Date: Tue, 26 Jul 2022 10:19:07 +0300
Message-ID: <20220726071911.122765-2-michaelgur@nvidia.com>
X-Mailer: git-send-email 2.21.0
In-Reply-To: <20220726071911.122765-1-michaelgur@nvidia.com>
References: <20220726071911.122765-1-michaelgur@nvidia.com>
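The patch below drops the per-entry spinlock and reuses the internal lock
the xarray already provides. A minimal sketch of the resulting locking
pattern (illustrative only; it reuses field and helper names from this
series rather than showing a complete function):

	/* was: spin_lock_irq(&ent->lock); */
	xa_lock_irq(&ent->mkeys);
	queue_adjust_cache_locked(ent);	/* entry fields protected by xa_lock */
	xa_unlock_irq(&ent->mkeys);
	/* was: spin_unlock_irq(&ent->lock); */

	/* helpers that must run under the lock can still assert it: */
	lockdep_assert_held(&ent->mkeys.xa_lock);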
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 26 Jul 2022 07:19:41.3514 (UTC) X-MS-Exchange-CrossTenant-Network-Message-Id: 2759f53b-e5ed-471f-a804-08da6ed73534 X-MS-Exchange-CrossTenant-Id: 43083d15-7273-40c1-b7db-39efd9ccc17a X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=43083d15-7273-40c1-b7db-39efd9ccc17a;Ip=[12.22.5.235];Helo=[mail.nvidia.com] X-MS-Exchange-CrossTenant-AuthSource: BN8NAM11FT031.eop-nam11.prod.protection.outlook.com X-MS-Exchange-CrossTenant-AuthAs: Anonymous X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem X-MS-Exchange-Transport-CrossTenantHeadersStamped: CH2PR12MB4823 Precedence: bulk List-ID: X-Mailing-List: linux-rdma@vger.kernel.org From: Aharon Landau In the next patch, ent->list will be replaced with an xarray. The xarray uses an internal lock to protect the indexes. Use it to protect all the entry fields, and get rid of ent->lock. Signed-off-by: Aharon Landau Signed-off-by: Leon Romanovsky --- drivers/infiniband/hw/mlx5/mlx5_ib.h | 5 +- drivers/infiniband/hw/mlx5/mr.c | 92 ++++++++++++++-------------- 2 files changed, 47 insertions(+), 50 deletions(-) diff --git a/drivers/infiniband/hw/mlx5/mlx5_ib.h b/drivers/infiniband/hw/mlx5/mlx5_ib.h index 688ee7c05a8f..42bc58967b1f 100644 --- a/drivers/infiniband/hw/mlx5/mlx5_ib.h +++ b/drivers/infiniband/hw/mlx5/mlx5_ib.h @@ -743,11 +743,8 @@ struct umr_common { }; struct mlx5_cache_ent { + struct xarray mkeys; struct list_head head; - /* sync access to the cahce entry - */ - spinlock_t lock; - char name[4]; u32 order; diff --git a/drivers/infiniband/hw/mlx5/mr.c b/drivers/infiniband/hw/mlx5/mr.c index aedfd7ff4846..d56e7ff74b98 100644 --- a/drivers/infiniband/hw/mlx5/mr.c +++ b/drivers/infiniband/hw/mlx5/mr.c @@ -153,10 +153,10 @@ static void create_mkey_callback(int status, struct mlx5_async_work *context) if (status) { create_mkey_warn(dev, status, mr->out); kfree(mr); - spin_lock_irqsave(&ent->lock, flags); + xa_lock_irqsave(&ent->mkeys, flags); ent->pending--; WRITE_ONCE(dev->fill_delay, 1); - spin_unlock_irqrestore(&ent->lock, flags); + xa_unlock_irqrestore(&ent->mkeys, flags); mod_timer(&dev->delay_timer, jiffies + HZ); return; } @@ -168,14 +168,14 @@ static void create_mkey_callback(int status, struct mlx5_async_work *context) WRITE_ONCE(dev->cache.last_add, jiffies); - spin_lock_irqsave(&ent->lock, flags); + xa_lock_irqsave(&ent->mkeys, flags); list_add_tail(&mr->list, &ent->head); ent->available_mrs++; ent->total_mrs++; /* If we are doing fill_to_high_water then keep going. 
*/ queue_adjust_cache_locked(ent); ent->pending--; - spin_unlock_irqrestore(&ent->lock, flags); + xa_unlock_irqrestore(&ent->mkeys, flags); } static int get_mkc_octo_size(unsigned int access_mode, unsigned int ndescs) @@ -239,23 +239,23 @@ static int add_keys(struct mlx5_cache_ent *ent, unsigned int num) err = -ENOMEM; break; } - spin_lock_irq(&ent->lock); + xa_lock_irq(&ent->mkeys); if (ent->pending >= MAX_PENDING_REG_MR) { err = -EAGAIN; - spin_unlock_irq(&ent->lock); + xa_unlock_irq(&ent->mkeys); kfree(mr); break; } ent->pending++; - spin_unlock_irq(&ent->lock); + xa_unlock_irq(&ent->mkeys); err = mlx5_ib_create_mkey_cb(ent->dev, &mr->mmkey, &ent->dev->async_ctx, in, inlen, mr->out, sizeof(mr->out), &mr->cb_work); if (err) { - spin_lock_irq(&ent->lock); + xa_lock_irq(&ent->mkeys); ent->pending--; - spin_unlock_irq(&ent->lock); + xa_unlock_irq(&ent->mkeys); mlx5_ib_warn(ent->dev, "create mkey failed %d\n", err); kfree(mr); break; @@ -293,9 +293,9 @@ static struct mlx5_ib_mr *create_cache_mr(struct mlx5_cache_ent *ent) init_waitqueue_head(&mr->mmkey.wait); mr->mmkey.type = MLX5_MKEY_MR; WRITE_ONCE(ent->dev->cache.last_add, jiffies); - spin_lock_irq(&ent->lock); + xa_lock_irq(&ent->mkeys); ent->total_mrs++; - spin_unlock_irq(&ent->lock); + xa_unlock_irq(&ent->mkeys); kfree(in); return mr; free_mr: @@ -309,17 +309,17 @@ static void remove_cache_mr_locked(struct mlx5_cache_ent *ent) { struct mlx5_ib_mr *mr; - lockdep_assert_held(&ent->lock); + lockdep_assert_held(&ent->mkeys.xa_lock); if (list_empty(&ent->head)) return; mr = list_first_entry(&ent->head, struct mlx5_ib_mr, list); list_del(&mr->list); ent->available_mrs--; ent->total_mrs--; - spin_unlock_irq(&ent->lock); + xa_unlock_irq(&ent->mkeys); mlx5_core_destroy_mkey(ent->dev->mdev, mr->mmkey.key); kfree(mr); - spin_lock_irq(&ent->lock); + xa_lock_irq(&ent->mkeys); } static int resize_available_mrs(struct mlx5_cache_ent *ent, unsigned int target, @@ -327,7 +327,7 @@ static int resize_available_mrs(struct mlx5_cache_ent *ent, unsigned int target, { int err; - lockdep_assert_held(&ent->lock); + lockdep_assert_held(&ent->mkeys.xa_lock); while (true) { if (limit_fill) @@ -337,11 +337,11 @@ static int resize_available_mrs(struct mlx5_cache_ent *ent, unsigned int target, if (target > ent->available_mrs + ent->pending) { u32 todo = target - (ent->available_mrs + ent->pending); - spin_unlock_irq(&ent->lock); + xa_unlock_irq(&ent->mkeys); err = add_keys(ent, todo); if (err == -EAGAIN) usleep_range(3000, 5000); - spin_lock_irq(&ent->lock); + xa_lock_irq(&ent->mkeys); if (err) { if (err != -EAGAIN) return err; @@ -369,7 +369,7 @@ static ssize_t size_write(struct file *filp, const char __user *buf, * cannot free MRs that are in use. Compute the target value for * available_mrs. */ - spin_lock_irq(&ent->lock); + xa_lock_irq(&ent->mkeys); if (target < ent->total_mrs - ent->available_mrs) { err = -EINVAL; goto err_unlock; @@ -382,12 +382,12 @@ static ssize_t size_write(struct file *filp, const char __user *buf, err = resize_available_mrs(ent, target, false); if (err) goto err_unlock; - spin_unlock_irq(&ent->lock); + xa_unlock_irq(&ent->mkeys); return count; err_unlock: - spin_unlock_irq(&ent->lock); + xa_unlock_irq(&ent->mkeys); return err; } @@ -427,10 +427,10 @@ static ssize_t limit_write(struct file *filp, const char __user *buf, * Upon set we immediately fill the cache to high water mark implied by * the limit. 
*/ - spin_lock_irq(&ent->lock); + xa_lock_irq(&ent->mkeys); ent->limit = var; err = resize_available_mrs(ent, 0, true); - spin_unlock_irq(&ent->lock); + xa_unlock_irq(&ent->mkeys); if (err) return err; return count; @@ -465,9 +465,9 @@ static bool someone_adding(struct mlx5_mr_cache *cache) struct mlx5_cache_ent *ent = &cache->ent[i]; bool ret; - spin_lock_irq(&ent->lock); + xa_lock_irq(&ent->mkeys); ret = ent->available_mrs < ent->limit; - spin_unlock_irq(&ent->lock); + xa_unlock_irq(&ent->mkeys); if (ret) return true; } @@ -481,7 +481,7 @@ static bool someone_adding(struct mlx5_mr_cache *cache) */ static void queue_adjust_cache_locked(struct mlx5_cache_ent *ent) { - lockdep_assert_held(&ent->lock); + lockdep_assert_held(&ent->mkeys.xa_lock); if (ent->disabled || READ_ONCE(ent->dev->fill_delay)) return; @@ -514,16 +514,16 @@ static void __cache_work_func(struct mlx5_cache_ent *ent) struct mlx5_mr_cache *cache = &dev->cache; int err; - spin_lock_irq(&ent->lock); + xa_lock_irq(&ent->mkeys); if (ent->disabled) goto out; if (ent->fill_to_high_water && ent->available_mrs + ent->pending < 2 * ent->limit && !READ_ONCE(dev->fill_delay)) { - spin_unlock_irq(&ent->lock); + xa_unlock_irq(&ent->mkeys); err = add_keys(ent, 1); - spin_lock_irq(&ent->lock); + xa_lock_irq(&ent->mkeys); if (ent->disabled) goto out; if (err) { @@ -556,11 +556,11 @@ static void __cache_work_func(struct mlx5_cache_ent *ent) * the garbage collection work to try to run in next cycle, in * order to free CPU resources to other tasks. */ - spin_unlock_irq(&ent->lock); + xa_unlock_irq(&ent->mkeys); need_delay = need_resched() || someone_adding(cache) || !time_after(jiffies, READ_ONCE(cache->last_add) + 300 * HZ); - spin_lock_irq(&ent->lock); + xa_lock_irq(&ent->mkeys); if (ent->disabled) goto out; if (need_delay) { @@ -571,7 +571,7 @@ static void __cache_work_func(struct mlx5_cache_ent *ent) queue_adjust_cache_locked(ent); } out: - spin_unlock_irq(&ent->lock); + xa_unlock_irq(&ent->mkeys); } static void delayed_cache_work_func(struct work_struct *work) @@ -592,11 +592,11 @@ struct mlx5_ib_mr *mlx5_mr_cache_alloc(struct mlx5_ib_dev *dev, if (!mlx5r_umr_can_reconfig(dev, 0, access_flags)) return ERR_PTR(-EOPNOTSUPP); - spin_lock_irq(&ent->lock); + xa_lock_irq(&ent->mkeys); if (list_empty(&ent->head)) { queue_adjust_cache_locked(ent); ent->miss++; - spin_unlock_irq(&ent->lock); + xa_unlock_irq(&ent->mkeys); mr = create_cache_mr(ent); if (IS_ERR(mr)) return mr; @@ -605,7 +605,7 @@ struct mlx5_ib_mr *mlx5_mr_cache_alloc(struct mlx5_ib_dev *dev, list_del(&mr->list); ent->available_mrs--; queue_adjust_cache_locked(ent); - spin_unlock_irq(&ent->lock); + xa_unlock_irq(&ent->mkeys); mlx5_clear_mr(mr); } @@ -617,11 +617,11 @@ static void mlx5_mr_cache_free(struct mlx5_ib_dev *dev, struct mlx5_ib_mr *mr) struct mlx5_cache_ent *ent = mr->cache_ent; WRITE_ONCE(dev->cache.last_add, jiffies); - spin_lock_irq(&ent->lock); + xa_lock_irq(&ent->mkeys); list_add_tail(&mr->list, &ent->head); ent->available_mrs++; queue_adjust_cache_locked(ent); - spin_unlock_irq(&ent->lock); + xa_unlock_irq(&ent->mkeys); } static void clean_keys(struct mlx5_ib_dev *dev, int c) @@ -634,16 +634,16 @@ static void clean_keys(struct mlx5_ib_dev *dev, int c) cancel_delayed_work(&ent->dwork); while (1) { - spin_lock_irq(&ent->lock); + xa_lock_irq(&ent->mkeys); if (list_empty(&ent->head)) { - spin_unlock_irq(&ent->lock); + xa_unlock_irq(&ent->mkeys); break; } mr = list_first_entry(&ent->head, struct mlx5_ib_mr, list); list_move(&mr->list, &del_list); ent->available_mrs--; 
ent->total_mrs--; - spin_unlock_irq(&ent->lock); + xa_unlock_irq(&ent->mkeys); mlx5_core_destroy_mkey(dev->mdev, mr->mmkey.key); } @@ -710,7 +710,7 @@ int mlx5_mr_cache_init(struct mlx5_ib_dev *dev) for (i = 0; i < MAX_MR_CACHE_ENTRIES; i++) { ent = &cache->ent[i]; INIT_LIST_HEAD(&ent->head); - spin_lock_init(&ent->lock); + xa_init_flags(&ent->mkeys, XA_FLAGS_LOCK_IRQ); ent->order = i + 2; ent->dev = dev; ent->limit = 0; @@ -734,9 +734,9 @@ int mlx5_mr_cache_init(struct mlx5_ib_dev *dev) ent->limit = dev->mdev->profile.mr_cache[i].limit; else ent->limit = 0; - spin_lock_irq(&ent->lock); + xa_lock_irq(&ent->mkeys); queue_adjust_cache_locked(ent); - spin_unlock_irq(&ent->lock); + xa_unlock_irq(&ent->mkeys); } mlx5_mr_cache_debugfs_init(dev); @@ -754,9 +754,9 @@ int mlx5_mr_cache_cleanup(struct mlx5_ib_dev *dev) for (i = 0; i < MAX_MR_CACHE_ENTRIES; i++) { struct mlx5_cache_ent *ent = &dev->cache.ent[i]; - spin_lock_irq(&ent->lock); + xa_lock_irq(&ent->mkeys); ent->disabled = true; - spin_unlock_irq(&ent->lock); + xa_unlock_irq(&ent->mkeys); cancel_delayed_work_sync(&ent->dwork); } @@ -1572,9 +1572,9 @@ int mlx5_ib_dereg_mr(struct ib_mr *ibmr, struct ib_udata *udata) /* Stop DMA */ if (mr->cache_ent) { if (mlx5r_umr_revoke_mr(mr)) { - spin_lock_irq(&mr->cache_ent->lock); + xa_lock_irq(&mr->cache_ent->mkeys); mr->cache_ent->total_mrs--; - spin_unlock_irq(&mr->cache_ent->lock); + xa_unlock_irq(&mr->cache_ent->mkeys); mr->cache_ent = NULL; } } From patchwork Tue Jul 26 07:19:08 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Michael Guralnik X-Patchwork-Id: 12928865 X-Patchwork-Delegate: jgg@ziepe.ca Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id B8F5AC433EF for ; Tue, 26 Jul 2022 07:19:57 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S237963AbiGZHTz (ORCPT ); Tue, 26 Jul 2022 03:19:55 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:53002 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S237944AbiGZHTw (ORCPT ); Tue, 26 Jul 2022 03:19:52 -0400 Received: from NAM02-BN1-obe.outbound.protection.outlook.com (mail-bn1nam07on2048.outbound.protection.outlook.com [40.107.212.48]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id B6BB32AE02; Tue, 26 Jul 2022 00:19:49 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none; b=ChCpU6tUe6g3YH6HabIJACVcUs91hAXatVex01y1Jkyupyp/LLHnH9+acbAQtG30zL/lkfizUb+T/H9E+/EG2Q308CL8SH6l7wxdTG0W4k0qNTKVM9eEqTi2FoMJqUPTsTWt1/I1Lx54jK4nv5JVMpemR+YA1pMoiHmDNMPVY+K64TSM7skunxgWuT3jqtCd/1mCA9wQDoErpeptgFCi+U7ocbW/B0QmCn2yVSy+yck9JL2pgiqUWBTgVedjWkQfPujujCchVIRHZkSRW0LywoGMYOLSOLvNsOGufhWedkWwcf/igcHjbZdRWUGkRLH6vs1KSq9UX0zlI9e05ORZ/g== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; s=arcselector9901; h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1; bh=XBp8/4m2ZQGDRhrNf+DBH0cNtF7zmattYj+SrTpTIl4=; 
From: Michael Guralnik
Subject: [PATCH rdma-next v1 2/5] RDMA/mlx5: Replace cache list with Xarray
Date: Tue, 26 Jul 2022 10:19:08 +0300
Message-ID: <20220726071911.122765-3-michaelgur@nvidia.com>
X-Mailer: git-send-email 2.21.0
In-Reply-To: <20220726071911.122765-1-michaelgur@nvidia.com>
References: <20220726071911.122765-1-michaelgur@nvidia.com>
MIME-Version: 1.0
Precedence: bulk
X-Mailing-List: linux-rdma@vger.kernel.org

From: Aharon Landau

The Xarray allows us to store the cached mkeys in a memory-efficient
way. Entries are reserved in the Xarray using xa_cmpxchg before calling
the upcoming callbacks, to avoid allocations in interrupt context.
xa_cmpxchg can sleep when given GFP_KERNEL, so we call it in a loop to
ensure there is one reserved entry for each process trying to reserve.
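The reservation step described above follows the xarray advanced-API
pattern: store a placeholder under the lock, and if the xarray needs to
allocate a node, drop the lock, let xas_nomem() allocate with GFP_KERNEL,
and retry. A simplified, self-contained sketch (the function and variable
names here are invented for illustration; the driver's real helper is
push_mkey() in the diff below):

	#include <linux/xarray.h>

	/* Reserve @index so a later interrupt-context store cannot fail. */
	static int reserve_slot(struct xarray *xa, unsigned long index)
	{
		XA_STATE(xas, xa, index);

		do {
			xas_lock_irq(&xas);
			/* Placeholder entry; readers still see NULL here. */
			xas_store(&xas, XA_ZERO_ENTRY);
			xas_unlock_irq(&xas);
			/*
			 * xas_nomem() returns true after allocating the missing
			 * node, asking us to retry the store under the lock.
			 */
		} while (xas_nomem(&xas, GFP_KERNEL));

		return xas_error(&xas);
	}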
Signed-off-by: Aharon Landau Signed-off-by: Leon Romanovsky --- drivers/infiniband/hw/mlx5/mlx5_ib.h | 14 +- drivers/infiniband/hw/mlx5/mr.c | 226 ++++++++++++++++++--------- 2 files changed, 152 insertions(+), 88 deletions(-) diff --git a/drivers/infiniband/hw/mlx5/mlx5_ib.h b/drivers/infiniband/hw/mlx5/mlx5_ib.h index 42bc58967b1f..e0eb666aefa1 100644 --- a/drivers/infiniband/hw/mlx5/mlx5_ib.h +++ b/drivers/infiniband/hw/mlx5/mlx5_ib.h @@ -651,8 +651,6 @@ struct mlx5_ib_mr { struct { u32 out[MLX5_ST_SZ_DW(create_mkey_out)]; struct mlx5_async_work cb_work; - /* Cache list element */ - struct list_head list; }; /* Used only by kernel MRs (umem == NULL) */ @@ -744,7 +742,8 @@ struct umr_common { struct mlx5_cache_ent { struct xarray mkeys; - struct list_head head; + unsigned long stored; + unsigned long reserved; char name[4]; u32 order; @@ -756,18 +755,13 @@ struct mlx5_cache_ent { u8 fill_to_high_water:1; /* - * - available_mrs is the length of list head, ie the number of MRs - * available for immediate allocation. - * - total_mrs is available_mrs plus all in use MRs that could be + * - total_mrs is stored mkeys plus all in use MRs that could be * returned to the cache. - * - limit is the low water mark for available_mrs, 2* limit is the + * - limit is the low water mark for stored mkeys, 2* limit is the * upper water mark. - * - pending is the number of MRs currently being created */ u32 total_mrs; - u32 available_mrs; u32 limit; - u32 pending; /* Statistics */ u32 miss; diff --git a/drivers/infiniband/hw/mlx5/mr.c b/drivers/infiniband/hw/mlx5/mr.c index d56e7ff74b98..cbb8882c7787 100644 --- a/drivers/infiniband/hw/mlx5/mr.c +++ b/drivers/infiniband/hw/mlx5/mr.c @@ -142,6 +142,104 @@ static void create_mkey_warn(struct mlx5_ib_dev *dev, int status, void *out) mlx5_cmd_out_err(dev->mdev, MLX5_CMD_OP_CREATE_MKEY, 0, out); } + +static int push_mkey(struct mlx5_cache_ent *ent, bool limit_pendings, + void *to_store) +{ + XA_STATE(xas, &ent->mkeys, 0); + void *curr; + + xa_lock_irq(&ent->mkeys); + if (limit_pendings && + (ent->reserved - ent->stored) > MAX_PENDING_REG_MR) { + xa_unlock_irq(&ent->mkeys); + return -EAGAIN; + } + while (1) { + /* + * This is cmpxchg (NULL, XA_ZERO_ENTRY) however this version + * doesn't transparently unlock. Instead we set the xas index to + * the current value of reserved every iteration. + */ + xas_set(&xas, ent->reserved); + curr = xas_load(&xas); + if (!curr) { + if (to_store && ent->stored == ent->reserved) + xas_store(&xas, to_store); + else + xas_store(&xas, XA_ZERO_ENTRY); + if (xas_valid(&xas)) { + ent->reserved++; + if (to_store) { + if (ent->stored != ent->reserved) + __xa_store(&ent->mkeys, + ent->stored, + to_store, + GFP_KERNEL); + ent->stored++; + queue_adjust_cache_locked(ent); + WRITE_ONCE(ent->dev->cache.last_add, + jiffies); + } + } + } + xa_unlock_irq(&ent->mkeys); + + /* + * Notice xas_nomem() must always be called as it cleans + * up any cached allocation. 
+ */ + if (!xas_nomem(&xas, GFP_KERNEL)) + break; + xa_lock_irq(&ent->mkeys); + } + if (xas_error(&xas)) + return xas_error(&xas); + if (WARN_ON(curr)) + return -EINVAL; + return 0; +} + +static void undo_push_reserve_mkey(struct mlx5_cache_ent *ent) +{ + void *old; + + ent->reserved--; + old = __xa_erase(&ent->mkeys, ent->reserved); + WARN_ON(old); +} + +static void push_to_reserved(struct mlx5_cache_ent *ent, struct mlx5_ib_mr *mr) +{ + void *old; + + old = __xa_store(&ent->mkeys, ent->stored, mr, 0); + WARN_ON(old); + ent->stored++; +} + +static struct mlx5_ib_mr *pop_stored_mkey(struct mlx5_cache_ent *ent) +{ + struct mlx5_ib_mr *mr; + void *old; + + ent->stored--; + ent->reserved--; + + if (ent->stored == ent->reserved) { + mr = __xa_erase(&ent->mkeys, ent->stored); + WARN_ON(!mr); + return mr; + } + + mr = __xa_store(&ent->mkeys, ent->stored, XA_ZERO_ENTRY, + GFP_KERNEL); + WARN_ON(!mr || xa_is_err(mr)); + old = __xa_erase(&ent->mkeys, ent->reserved); + WARN_ON(old); + return mr; +} + static void create_mkey_callback(int status, struct mlx5_async_work *context) { struct mlx5_ib_mr *mr = @@ -154,7 +252,7 @@ static void create_mkey_callback(int status, struct mlx5_async_work *context) create_mkey_warn(dev, status, mr->out); kfree(mr); xa_lock_irqsave(&ent->mkeys, flags); - ent->pending--; + undo_push_reserve_mkey(ent); WRITE_ONCE(dev->fill_delay, 1); xa_unlock_irqrestore(&ent->mkeys, flags); mod_timer(&dev->delay_timer, jiffies + HZ); @@ -169,12 +267,10 @@ static void create_mkey_callback(int status, struct mlx5_async_work *context) WRITE_ONCE(dev->cache.last_add, jiffies); xa_lock_irqsave(&ent->mkeys, flags); - list_add_tail(&mr->list, &ent->head); - ent->available_mrs++; + push_to_reserved(ent, mr); ent->total_mrs++; /* If we are doing fill_to_high_water then keep going. 
*/ queue_adjust_cache_locked(ent); - ent->pending--; xa_unlock_irqrestore(&ent->mkeys, flags); } @@ -237,31 +333,33 @@ static int add_keys(struct mlx5_cache_ent *ent, unsigned int num) mr = alloc_cache_mr(ent, mkc); if (!mr) { err = -ENOMEM; - break; + goto free_in; } - xa_lock_irq(&ent->mkeys); - if (ent->pending >= MAX_PENDING_REG_MR) { - err = -EAGAIN; - xa_unlock_irq(&ent->mkeys); - kfree(mr); - break; - } - ent->pending++; - xa_unlock_irq(&ent->mkeys); + + err = push_mkey(ent, true, NULL); + if (err) + goto free_mr; + err = mlx5_ib_create_mkey_cb(ent->dev, &mr->mmkey, &ent->dev->async_ctx, in, inlen, mr->out, sizeof(mr->out), &mr->cb_work); if (err) { - xa_lock_irq(&ent->mkeys); - ent->pending--; - xa_unlock_irq(&ent->mkeys); mlx5_ib_warn(ent->dev, "create mkey failed %d\n", err); - kfree(mr); - break; + goto err_undo_reserve; } } + kfree(in); + return 0; + +err_undo_reserve: + xa_lock_irq(&ent->mkeys); + undo_push_reserve_mkey(ent); + xa_unlock_irq(&ent->mkeys); +free_mr: + kfree(mr); +free_in: kfree(in); return err; } @@ -310,11 +408,9 @@ static void remove_cache_mr_locked(struct mlx5_cache_ent *ent) struct mlx5_ib_mr *mr; lockdep_assert_held(&ent->mkeys.xa_lock); - if (list_empty(&ent->head)) + if (!ent->stored) return; - mr = list_first_entry(&ent->head, struct mlx5_ib_mr, list); - list_del(&mr->list); - ent->available_mrs--; + mr = pop_stored_mkey(ent); ent->total_mrs--; xa_unlock_irq(&ent->mkeys); mlx5_core_destroy_mkey(ent->dev->mdev, mr->mmkey.key); @@ -324,6 +420,7 @@ static void remove_cache_mr_locked(struct mlx5_cache_ent *ent) static int resize_available_mrs(struct mlx5_cache_ent *ent, unsigned int target, bool limit_fill) + __acquires(&ent->mkeys) __releases(&ent->mkeys) { int err; @@ -332,10 +429,10 @@ static int resize_available_mrs(struct mlx5_cache_ent *ent, unsigned int target, while (true) { if (limit_fill) target = ent->limit * 2; - if (target == ent->available_mrs + ent->pending) + if (target == ent->reserved) return 0; - if (target > ent->available_mrs + ent->pending) { - u32 todo = target - (ent->available_mrs + ent->pending); + if (target > ent->reserved) { + u32 todo = target - ent->reserved; xa_unlock_irq(&ent->mkeys); err = add_keys(ent, todo); @@ -366,15 +463,15 @@ static ssize_t size_write(struct file *filp, const char __user *buf, /* * Target is the new value of total_mrs the user requests, however we - * cannot free MRs that are in use. Compute the target value for - * available_mrs. + * cannot free MRs that are in use. Compute the target value for stored + * mkeys. 
*/ xa_lock_irq(&ent->mkeys); - if (target < ent->total_mrs - ent->available_mrs) { + if (target < ent->total_mrs - ent->stored) { err = -EINVAL; goto err_unlock; } - target = target - (ent->total_mrs - ent->available_mrs); + target = target - (ent->total_mrs - ent->stored); if (target < ent->limit || target > ent->limit*2) { err = -EINVAL; goto err_unlock; @@ -466,7 +563,7 @@ static bool someone_adding(struct mlx5_mr_cache *cache) bool ret; xa_lock_irq(&ent->mkeys); - ret = ent->available_mrs < ent->limit; + ret = ent->stored < ent->limit; xa_unlock_irq(&ent->mkeys); if (ret) return true; @@ -485,22 +582,22 @@ static void queue_adjust_cache_locked(struct mlx5_cache_ent *ent) if (ent->disabled || READ_ONCE(ent->dev->fill_delay)) return; - if (ent->available_mrs < ent->limit) { + if (ent->stored < ent->limit) { ent->fill_to_high_water = true; mod_delayed_work(ent->dev->cache.wq, &ent->dwork, 0); } else if (ent->fill_to_high_water && - ent->available_mrs + ent->pending < 2 * ent->limit) { + ent->reserved < 2 * ent->limit) { /* * Once we start populating due to hitting a low water mark * continue until we pass the high water mark. */ mod_delayed_work(ent->dev->cache.wq, &ent->dwork, 0); - } else if (ent->available_mrs == 2 * ent->limit) { + } else if (ent->stored == 2 * ent->limit) { ent->fill_to_high_water = false; - } else if (ent->available_mrs > 2 * ent->limit) { + } else if (ent->stored > 2 * ent->limit) { /* Queue deletion of excess entries */ ent->fill_to_high_water = false; - if (ent->pending) + if (ent->stored != ent->reserved) queue_delayed_work(ent->dev->cache.wq, &ent->dwork, msecs_to_jiffies(1000)); else @@ -518,8 +615,7 @@ static void __cache_work_func(struct mlx5_cache_ent *ent) if (ent->disabled) goto out; - if (ent->fill_to_high_water && - ent->available_mrs + ent->pending < 2 * ent->limit && + if (ent->fill_to_high_water && ent->reserved < 2 * ent->limit && !READ_ONCE(dev->fill_delay)) { xa_unlock_irq(&ent->mkeys); err = add_keys(ent, 1); @@ -528,8 +624,8 @@ static void __cache_work_func(struct mlx5_cache_ent *ent) goto out; if (err) { /* - * EAGAIN only happens if pending is positive, so we - * will be rescheduled from reg_mr_callback(). The only + * EAGAIN only happens if there are pending MRs, so we + * will be rescheduled when storing them. The only * failure path here is ENOMEM. 
*/ if (err != -EAGAIN) { @@ -541,7 +637,7 @@ static void __cache_work_func(struct mlx5_cache_ent *ent) msecs_to_jiffies(1000)); } } - } else if (ent->available_mrs > 2 * ent->limit) { + } else if (ent->stored > 2 * ent->limit) { bool need_delay; /* @@ -593,7 +689,7 @@ struct mlx5_ib_mr *mlx5_mr_cache_alloc(struct mlx5_ib_dev *dev, return ERR_PTR(-EOPNOTSUPP); xa_lock_irq(&ent->mkeys); - if (list_empty(&ent->head)) { + if (!ent->stored) { queue_adjust_cache_locked(ent); ent->miss++; xa_unlock_irq(&ent->mkeys); @@ -601,9 +697,7 @@ struct mlx5_ib_mr *mlx5_mr_cache_alloc(struct mlx5_ib_dev *dev, if (IS_ERR(mr)) return mr; } else { - mr = list_first_entry(&ent->head, struct mlx5_ib_mr, list); - list_del(&mr->list); - ent->available_mrs--; + mr = pop_stored_mkey(ent); queue_adjust_cache_locked(ent); xa_unlock_irq(&ent->mkeys); @@ -612,45 +706,23 @@ struct mlx5_ib_mr *mlx5_mr_cache_alloc(struct mlx5_ib_dev *dev, return mr; } -static void mlx5_mr_cache_free(struct mlx5_ib_dev *dev, struct mlx5_ib_mr *mr) -{ - struct mlx5_cache_ent *ent = mr->cache_ent; - - WRITE_ONCE(dev->cache.last_add, jiffies); - xa_lock_irq(&ent->mkeys); - list_add_tail(&mr->list, &ent->head); - ent->available_mrs++; - queue_adjust_cache_locked(ent); - xa_unlock_irq(&ent->mkeys); -} - static void clean_keys(struct mlx5_ib_dev *dev, int c) { struct mlx5_mr_cache *cache = &dev->cache; struct mlx5_cache_ent *ent = &cache->ent[c]; - struct mlx5_ib_mr *tmp_mr; struct mlx5_ib_mr *mr; - LIST_HEAD(del_list); cancel_delayed_work(&ent->dwork); - while (1) { - xa_lock_irq(&ent->mkeys); - if (list_empty(&ent->head)) { - xa_unlock_irq(&ent->mkeys); - break; - } - mr = list_first_entry(&ent->head, struct mlx5_ib_mr, list); - list_move(&mr->list, &del_list); - ent->available_mrs--; + xa_lock_irq(&ent->mkeys); + while (ent->stored) { + mr = pop_stored_mkey(ent); ent->total_mrs--; xa_unlock_irq(&ent->mkeys); mlx5_core_destroy_mkey(dev->mdev, mr->mmkey.key); - } - - list_for_each_entry_safe(mr, tmp_mr, &del_list, list) { - list_del(&mr->list); kfree(mr); + xa_lock_irq(&ent->mkeys); } + xa_unlock_irq(&ent->mkeys); } static void mlx5_mr_cache_debugfs_cleanup(struct mlx5_ib_dev *dev) @@ -680,7 +752,7 @@ static void mlx5_mr_cache_debugfs_init(struct mlx5_ib_dev *dev) dir = debugfs_create_dir(ent->name, cache->root); debugfs_create_file("size", 0600, dir, ent, &size_fops); debugfs_create_file("limit", 0600, dir, ent, &limit_fops); - debugfs_create_u32("cur", 0400, dir, &ent->available_mrs); + debugfs_create_ulong("cur", 0400, dir, &ent->stored); debugfs_create_u32("miss", 0600, dir, &ent->miss); } } @@ -709,7 +781,6 @@ int mlx5_mr_cache_init(struct mlx5_ib_dev *dev) timer_setup(&dev->delay_timer, delay_time_func, 0); for (i = 0; i < MAX_MR_CACHE_ENTRIES; i++) { ent = &cache->ent[i]; - INIT_LIST_HEAD(&ent->head); xa_init_flags(&ent->mkeys, XA_FLAGS_LOCK_IRQ); ent->order = i + 2; ent->dev = dev; @@ -1571,7 +1642,8 @@ int mlx5_ib_dereg_mr(struct ib_mr *ibmr, struct ib_udata *udata) /* Stop DMA */ if (mr->cache_ent) { - if (mlx5r_umr_revoke_mr(mr)) { + if (mlx5r_umr_revoke_mr(mr) || + push_mkey(mr->cache_ent, false, mr)) { xa_lock_irq(&mr->cache_ent->mkeys); mr->cache_ent->total_mrs--; xa_unlock_irq(&mr->cache_ent->mkeys); @@ -1595,9 +1667,7 @@ int mlx5_ib_dereg_mr(struct ib_mr *ibmr, struct ib_udata *udata) mlx5_ib_free_odp_mr(mr); } - if (mr->cache_ent) { - mlx5_mr_cache_free(dev, mr); - } else { + if (!mr->cache_ent) { mlx5_free_priv_descs(mr); kfree(mr); } From patchwork Tue Jul 26 07:19:09 2022 Content-Type: text/plain; charset="utf-8" 
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Michael Guralnik
X-Patchwork-Id: 12928866
X-Patchwork-Delegate: jgg@ziepe.ca
From: Michael Guralnik
Subject: [PATCH rdma-next v1 3/5] RDMA/mlx5: Store the number of in_use cache mkeys instead of total_mrs
Date: Tue, 26 Jul 2022 10:19:09 +0300
Message-ID: <20220726071911.122765-4-michaelgur@nvidia.com>
X-Mailer: git-send-email 2.21.0
In-Reply-To: <20220726071911.122765-1-michaelgur@nvidia.com>
References: <20220726071911.122765-1-michaelgur@nvidia.com>
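The patch below stops maintaining a derived total and instead counts the
mkeys currently handed out. A schematic sketch of the accounting change
(field names taken from this series; simplified, not the full driver
logic):

	xa_lock_irq(&ent->mkeys);
	ent->in_use++;			/* mkey handed out to a caller */
	xa_unlock_irq(&ent->mkeys);
	...
	/* the old "total" is now reported as cached + handed-out mkeys */
	snprintf(lbuf, sizeof(lbuf), "%ld\n", ent->stored + ent->in_use);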
(UTC) X-MS-Exchange-CrossTenant-Network-Message-Id: 10ba0d28-7a0b-4ac5-38fc-08da6ed73b10 X-MS-Exchange-CrossTenant-Id: 43083d15-7273-40c1-b7db-39efd9ccc17a X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=43083d15-7273-40c1-b7db-39efd9ccc17a;Ip=[12.22.5.234];Helo=[mail.nvidia.com] X-MS-Exchange-CrossTenant-AuthSource: BN8NAM11FT011.eop-nam11.prod.protection.outlook.com X-MS-Exchange-CrossTenant-AuthAs: Anonymous X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem X-MS-Exchange-Transport-CrossTenantHeadersStamped: BL3PR12MB6548 Precedence: bulk List-ID: X-Mailing-List: linux-rdma@vger.kernel.org From: Aharon Landau total_mrs is used only to calculate the number of mkeys currently in use. To simplify things, replace it with a new member called "in_use" and directly store the number of mkeys currently in use. Signed-off-by: Aharon Landau Signed-off-by: Leon Romanovsky --- drivers/infiniband/hw/mlx5/mlx5_ib.h | 4 +--- drivers/infiniband/hw/mlx5/mr.c | 30 ++++++++++++++-------------- 2 files changed, 16 insertions(+), 18 deletions(-) diff --git a/drivers/infiniband/hw/mlx5/mlx5_ib.h b/drivers/infiniband/hw/mlx5/mlx5_ib.h index e0eb666aefa1..da9202f4b5f3 100644 --- a/drivers/infiniband/hw/mlx5/mlx5_ib.h +++ b/drivers/infiniband/hw/mlx5/mlx5_ib.h @@ -755,12 +755,10 @@ struct mlx5_cache_ent { u8 fill_to_high_water:1; /* - * - total_mrs is stored mkeys plus all in use MRs that could be - * returned to the cache. * - limit is the low water mark for stored mkeys, 2* limit is the * upper water mark. */ - u32 total_mrs; + u32 in_use; u32 limit; /* Statistics */ diff --git a/drivers/infiniband/hw/mlx5/mr.c b/drivers/infiniband/hw/mlx5/mr.c index cbb8882c7787..26bfdbba24b4 100644 --- a/drivers/infiniband/hw/mlx5/mr.c +++ b/drivers/infiniband/hw/mlx5/mr.c @@ -268,7 +268,6 @@ static void create_mkey_callback(int status, struct mlx5_async_work *context) xa_lock_irqsave(&ent->mkeys, flags); push_to_reserved(ent, mr); - ent->total_mrs++; /* If we are doing fill_to_high_water then keep going. */ queue_adjust_cache_locked(ent); xa_unlock_irqrestore(&ent->mkeys, flags); @@ -391,9 +390,6 @@ static struct mlx5_ib_mr *create_cache_mr(struct mlx5_cache_ent *ent) init_waitqueue_head(&mr->mmkey.wait); mr->mmkey.type = MLX5_MKEY_MR; WRITE_ONCE(ent->dev->cache.last_add, jiffies); - xa_lock_irq(&ent->mkeys); - ent->total_mrs++; - xa_unlock_irq(&ent->mkeys); kfree(in); return mr; free_mr: @@ -411,7 +407,6 @@ static void remove_cache_mr_locked(struct mlx5_cache_ent *ent) if (!ent->stored) return; mr = pop_stored_mkey(ent); - ent->total_mrs--; xa_unlock_irq(&ent->mkeys); mlx5_core_destroy_mkey(ent->dev->mdev, mr->mmkey.key); kfree(mr); @@ -467,11 +462,11 @@ static ssize_t size_write(struct file *filp, const char __user *buf, * mkeys. 
*/ xa_lock_irq(&ent->mkeys); - if (target < ent->total_mrs - ent->stored) { + if (target < ent->in_use) { err = -EINVAL; goto err_unlock; } - target = target - (ent->total_mrs - ent->stored); + target = target - ent->in_use; if (target < ent->limit || target > ent->limit*2) { err = -EINVAL; goto err_unlock; @@ -495,7 +490,7 @@ static ssize_t size_read(struct file *filp, char __user *buf, size_t count, char lbuf[20]; int err; - err = snprintf(lbuf, sizeof(lbuf), "%d\n", ent->total_mrs); + err = snprintf(lbuf, sizeof(lbuf), "%ld\n", ent->stored + ent->in_use); if (err < 0) return err; @@ -689,13 +684,19 @@ struct mlx5_ib_mr *mlx5_mr_cache_alloc(struct mlx5_ib_dev *dev, return ERR_PTR(-EOPNOTSUPP); xa_lock_irq(&ent->mkeys); + ent->in_use++; + if (!ent->stored) { queue_adjust_cache_locked(ent); ent->miss++; xa_unlock_irq(&ent->mkeys); mr = create_cache_mr(ent); - if (IS_ERR(mr)) + if (IS_ERR(mr)) { + xa_lock_irq(&ent->mkeys); + ent->in_use--; + xa_unlock_irq(&ent->mkeys); return mr; + } } else { mr = pop_stored_mkey(ent); queue_adjust_cache_locked(ent); @@ -716,7 +717,6 @@ static void clean_keys(struct mlx5_ib_dev *dev, int c) xa_lock_irq(&ent->mkeys); while (ent->stored) { mr = pop_stored_mkey(ent); - ent->total_mrs--; xa_unlock_irq(&ent->mkeys); mlx5_core_destroy_mkey(dev->mdev, mr->mmkey.key); kfree(mr); @@ -1642,13 +1642,13 @@ int mlx5_ib_dereg_mr(struct ib_mr *ibmr, struct ib_udata *udata) /* Stop DMA */ if (mr->cache_ent) { + xa_lock_irq(&mr->cache_ent->mkeys); + mr->cache_ent->in_use--; + xa_unlock_irq(&mr->cache_ent->mkeys); + if (mlx5r_umr_revoke_mr(mr) || - push_mkey(mr->cache_ent, false, mr)) { - xa_lock_irq(&mr->cache_ent->mkeys); - mr->cache_ent->total_mrs--; - xa_unlock_irq(&mr->cache_ent->mkeys); + push_mkey(mr->cache_ent, false, mr)) mr->cache_ent = NULL; - } } if (!mr->cache_ent) { rc = destroy_mkey(to_mdev(mr->ibmr.device), mr); From patchwork Tue Jul 26 07:19:10 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Michael Guralnik X-Patchwork-Id: 12928868 X-Patchwork-Delegate: jgg@ziepe.ca Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 43B89C433EF for ; Tue, 26 Jul 2022 07:21:10 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230339AbiGZHVI (ORCPT ); Tue, 26 Jul 2022 03:21:08 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:53416 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S238003AbiGZHUT (ORCPT ); Tue, 26 Jul 2022 03:20:19 -0400 Received: from NAM12-MW2-obe.outbound.protection.outlook.com (mail-mw2nam12on2054.outbound.protection.outlook.com [40.107.244.54]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 82662C15; Tue, 26 Jul 2022 00:20:11 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none; b=ZOD1UndmeTGTk9hgFyivuVF5S7Oe4keZXZQZZMkR0V2ccNKRL9C2VT+gmSh6RqZ3WgAayKwXAsjTweLSBmuyHpr/MPeAhcK9e7IhoA7BqdZD+TafEeC82PX8u504ILUG08wJSuQU7BLz/nVr3aW8qV/gi/E2QAHIV0OWoWcVAO5D11Fjc1QKxQXTmoTKchT1ndDAY1/Wd40DNMJZWX5ss/oxqjiUwcqP/+2ogSgyje4jK6snZ/xUXaddFQMCnzH3b9oddnNGSbNafxxzDmc5RJlRKmKgZYVC8r655PCFmzvsA5fpm8JlifljZQmnoAgyEzbHPFAXLdXsLdTKhLH2eQ== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; s=arcselector9901; 
From: Michael Guralnik
Subject: [PATCH rdma-next v1 4/5] RDMA/mlx5: Store in the cache mkeys instead of mrs
Date: Tue, 26 Jul 2022 10:19:10 +0300
Message-ID: <20220726071911.122765-5-michaelgur@nvidia.com>
X-Mailer: git-send-email 2.21.0
In-Reply-To: <20220726071911.122765-1-michaelgur@nvidia.com>
References: <20220726071911.122765-1-michaelgur@nvidia.com>
MIME-Version: 1.0
Precedence: bulk
X-Mailing-List: linux-rdma@vger.kernel.org

From: Aharon Landau

Currently, the driver stores the mlx5_ib_mr struct in the cache entries,
although the only use of the cached MR is its mkey. Store only the mkey
in the cache.
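Because an mkey is just a 32-bit index, it can be kept in the xarray as a
"value entry" instead of a pointer to a struct. A minimal sketch of the
idea (helper names invented for illustration; the real helpers are
push_to_reserved()/pop_stored_mkey() in the diff below, and this assumes
a 64-bit build where a u32 always fits in an xarray value entry):

	/* Store a raw mkey in an already-reserved slot; caller holds xa_lock. */
	static void stash_mkey(struct xarray *xa, unsigned long index, u32 mkey)
	{
		__xa_store(xa, index, xa_mk_value(mkey), 0);
	}

	/* Remove it and return the plain integer again. */
	static u32 unstash_mkey(struct xarray *xa, unsigned long index)
	{
		void *entry = __xa_erase(xa, index);

		return xa_is_value(entry) ? (u32)xa_to_value(entry) : 0;
	}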
 drivers/infiniband/hw/mlx5/mlx5_ib.h |  26 ++--
 drivers/infiniband/hw/mlx5/mr.c      | 200 ++++++++++++---------------
 2 files changed, 97 insertions(+), 129 deletions(-)

diff --git a/drivers/infiniband/hw/mlx5/mlx5_ib.h b/drivers/infiniband/hw/mlx5/mlx5_ib.h
index da9202f4b5f3..91f985cd7d90 100644
--- a/drivers/infiniband/hw/mlx5/mlx5_ib.h
+++ b/drivers/infiniband/hw/mlx5/mlx5_ib.h
@@ -619,6 +619,7 @@ struct mlx5_ib_mkey {
 	unsigned int ndescs;
 	struct wait_queue_head wait;
 	refcount_t usecount;
+	struct mlx5_cache_ent *cache_ent;
 };
 
 #define MLX5_IB_MTT_PRESENT (MLX5_IB_MTT_READ | MLX5_IB_MTT_WRITE)
@@ -641,18 +642,9 @@ struct mlx5_ib_mr {
 	struct ib_mr ibmr;
 	struct mlx5_ib_mkey mmkey;
 
-	/* User MR data */
-	struct mlx5_cache_ent *cache_ent;
-	/* Everything after cache_ent is zero'd when MR allocated */
 	struct ib_umem *umem;
 
 	union {
-		/* Used only while the MR is in the cache */
-		struct {
-			u32 out[MLX5_ST_SZ_DW(create_mkey_out)];
-			struct mlx5_async_work cb_work;
-		};
-
 		/* Used only by kernel MRs (umem == NULL) */
 		struct {
 			void *descs;
@@ -692,12 +684,6 @@ struct mlx5_ib_mr {
 	};
 };
 
-/* Zero the fields in the mr that are variant depending on usage */
-static inline void mlx5_clear_mr(struct mlx5_ib_mr *mr)
-{
-	memset_after(mr, 0, cache_ent);
-}
-
 static inline bool is_odp_mr(struct mlx5_ib_mr *mr)
 {
 	return IS_ENABLED(CONFIG_INFINIBAND_ON_DEMAND_PAGING) && mr->umem &&
@@ -768,6 +754,16 @@ struct mlx5_cache_ent {
 	struct delayed_work dwork;
 };
 
+struct mlx5r_async_create_mkey {
+	union {
+		u32 in[MLX5_ST_SZ_BYTES(create_mkey_in)];
+		u32 out[MLX5_ST_SZ_DW(create_mkey_out)];
+	};
+	struct mlx5_async_work cb_work;
+	struct mlx5_cache_ent *ent;
+	u32 mkey;
+};
+
 struct mlx5_mr_cache {
 	struct workqueue_struct *wq;
 	struct mlx5_cache_ent ent[MAX_MR_CACHE_ENTRIES];
diff --git a/drivers/infiniband/hw/mlx5/mr.c b/drivers/infiniband/hw/mlx5/mr.c
index 26bfdbba24b4..edbc2990d151 100644
--- a/drivers/infiniband/hw/mlx5/mr.c
+++ b/drivers/infiniband/hw/mlx5/mr.c
@@ -82,15 +82,14 @@ static void set_mkc_access_pd_addr_fields(void *mkc, int acc, u64 start_addr,
 	MLX5_SET64(mkc, mkc, start_addr, start_addr);
 }
 
-static void assign_mkey_variant(struct mlx5_ib_dev *dev,
-				struct mlx5_ib_mkey *mkey, u32 *in)
+static void assign_mkey_variant(struct mlx5_ib_dev *dev, u32 *mkey, u32 *in)
 {
 	u8 key = atomic_inc_return(&dev->mkey_var);
 	void *mkc;
 
 	mkc = MLX5_ADDR_OF(create_mkey_in, in, memory_key_mkey_entry);
 	MLX5_SET(mkc, mkc, mkey_7_0, key);
-	mkey->key = key;
+	*mkey = key;
 }
 
 static int mlx5_ib_create_mkey(struct mlx5_ib_dev *dev,
@@ -98,7 +97,7 @@
 {
 	int ret;
 
-	assign_mkey_variant(dev, mkey, in);
+	assign_mkey_variant(dev, &mkey->key, in);
 	ret = mlx5_core_create_mkey(dev->mdev, &mkey->key, in, inlen);
 	if (!ret)
 		init_waitqueue_head(&mkey->wait);
@@ -106,17 +105,18 @@
 	return ret;
 }
 
-static int
-mlx5_ib_create_mkey_cb(struct mlx5_ib_dev *dev,
-		       struct mlx5_ib_mkey *mkey,
-		       struct mlx5_async_ctx *async_ctx,
-		       u32 *in, int inlen, u32 *out, int outlen,
-		       struct mlx5_async_work *context)
+static int mlx5_ib_create_mkey_cb(struct mlx5r_async_create_mkey *async_create)
 {
-	MLX5_SET(create_mkey_in, in, opcode, MLX5_CMD_OP_CREATE_MKEY);
-	assign_mkey_variant(dev, mkey, in);
-	return mlx5_cmd_exec_cb(async_ctx, in, inlen, out, outlen,
-				create_mkey_callback, context);
+	struct mlx5_ib_dev *dev = async_create->ent->dev;
+	size_t inlen = MLX5_ST_SZ_BYTES(create_mkey_in);
+	size_t outlen = MLX5_ST_SZ_BYTES(create_mkey_out);
+
+	MLX5_SET(create_mkey_in, async_create->in, opcode,
+		 MLX5_CMD_OP_CREATE_MKEY);
+	assign_mkey_variant(dev, &async_create->mkey, async_create->in);
+	return mlx5_cmd_exec_cb(&dev->async_ctx, async_create->in, inlen,
+				async_create->out, outlen, create_mkey_callback,
+				&async_create->cb_work);
 }
 
 static int mr_cache_max_order(struct mlx5_ib_dev *dev);
@@ -209,48 +209,47 @@ static void undo_push_reserve_mkey(struct mlx5_cache_ent *ent)
 	WARN_ON(old);
 }
 
-static void push_to_reserved(struct mlx5_cache_ent *ent, struct mlx5_ib_mr *mr)
+static void push_to_reserved(struct mlx5_cache_ent *ent, u32 mkey)
 {
 	void *old;
 
-	old = __xa_store(&ent->mkeys, ent->stored, mr, 0);
+	old = __xa_store(&ent->mkeys, ent->stored, xa_mk_value(mkey), 0);
 	WARN_ON(old);
 	ent->stored++;
 }
 
-static struct mlx5_ib_mr *pop_stored_mkey(struct mlx5_cache_ent *ent)
+static u32 pop_stored_mkey(struct mlx5_cache_ent *ent)
 {
-	struct mlx5_ib_mr *mr;
-	void *old;
+	void *old, *xa_mkey;
 
 	ent->stored--;
 	ent->reserved--;
 
 	if (ent->stored == ent->reserved) {
-		mr = __xa_erase(&ent->mkeys, ent->stored);
-		WARN_ON(!mr);
-		return mr;
+		xa_mkey = __xa_erase(&ent->mkeys, ent->stored);
+		WARN_ON(!xa_mkey);
+		return (u32)xa_to_value(xa_mkey);
 	}
 
-	mr = __xa_store(&ent->mkeys, ent->stored, XA_ZERO_ENTRY,
-			GFP_KERNEL);
-	WARN_ON(!mr || xa_is_err(mr));
+	xa_mkey = __xa_store(&ent->mkeys, ent->stored, XA_ZERO_ENTRY,
+			     GFP_KERNEL);
+	WARN_ON(!xa_mkey || xa_is_err(xa_mkey));
 	old = __xa_erase(&ent->mkeys, ent->reserved);
 	WARN_ON(old);
-	return mr;
+	return (u32)xa_to_value(xa_mkey);
 }
 
 static void create_mkey_callback(int status, struct mlx5_async_work *context)
 {
-	struct mlx5_ib_mr *mr =
-		container_of(context, struct mlx5_ib_mr, cb_work);
-	struct mlx5_cache_ent *ent = mr->cache_ent;
+	struct mlx5r_async_create_mkey *mkey_out =
+		container_of(context, struct mlx5r_async_create_mkey, cb_work);
+	struct mlx5_cache_ent *ent = mkey_out->ent;
 	struct mlx5_ib_dev *dev = ent->dev;
 	unsigned long flags;
 
 	if (status) {
-		create_mkey_warn(dev, status, mr->out);
-		kfree(mr);
+		create_mkey_warn(dev, status, mkey_out->out);
+		kfree(mkey_out);
 		xa_lock_irqsave(&ent->mkeys, flags);
 		undo_push_reserve_mkey(ent);
 		WRITE_ONCE(dev->fill_delay, 1);
@@ -259,18 +258,16 @@ static void create_mkey_callback(int status, struct mlx5_async_work *context)
 		return;
 	}
 
-	mr->mmkey.type = MLX5_MKEY_MR;
-	mr->mmkey.key |= mlx5_idx_to_mkey(
-		MLX5_GET(create_mkey_out, mr->out, mkey_index));
-	init_waitqueue_head(&mr->mmkey.wait);
-
+	mkey_out->mkey |= mlx5_idx_to_mkey(
+		MLX5_GET(create_mkey_out, mkey_out->out, mkey_index));
 	WRITE_ONCE(dev->cache.last_add, jiffies);
 
 	xa_lock_irqsave(&ent->mkeys, flags);
-	push_to_reserved(ent, mr);
+	push_to_reserved(ent, mkey_out->mkey);
 	/* If we are doing fill_to_high_water then keep going. */
 	queue_adjust_cache_locked(ent);
 	xa_unlock_irqrestore(&ent->mkeys, flags);
+	kfree(mkey_out);
 }
 
 static int get_mkc_octo_size(unsigned int access_mode, unsigned int ndescs)
@@ -292,15 +289,8 @@ static int get_mkc_octo_size(unsigned int access_mode, unsigned int ndescs)
 	return ret;
 }
 
-static struct mlx5_ib_mr *alloc_cache_mr(struct mlx5_cache_ent *ent, void *mkc)
+static void set_cache_mkc(struct mlx5_cache_ent *ent, void *mkc)
 {
-	struct mlx5_ib_mr *mr;
-
-	mr = kzalloc(sizeof(*mr), GFP_KERNEL);
-	if (!mr)
-		return NULL;
-	mr->cache_ent = ent;
-
 	set_mkc_access_pd_addr_fields(mkc, 0, 0, ent->dev->umrc.pd);
 	MLX5_SET(mkc, mkc, free, 1);
 	MLX5_SET(mkc, mkc, umr_en, 1);
@@ -310,106 +300,82 @@ static struct mlx5_ib_mr *alloc_cache_mr(struct mlx5_cache_ent *ent, void *mkc)
 	MLX5_SET(mkc, mkc, translations_octword_size,
 		 get_mkc_octo_size(ent->access_mode, ent->ndescs));
 	MLX5_SET(mkc, mkc, log_page_size, ent->page);
-	return mr;
 }
 
 /* Asynchronously schedule new MRs to be populated in the cache. */
 static int add_keys(struct mlx5_cache_ent *ent, unsigned int num)
 {
-	size_t inlen = MLX5_ST_SZ_BYTES(create_mkey_in);
-	struct mlx5_ib_mr *mr;
+	struct mlx5r_async_create_mkey *async_create;
 	void *mkc;
-	u32 *in;
 	int err = 0;
 	int i;
 
-	in = kzalloc(inlen, GFP_KERNEL);
-	if (!in)
-		return -ENOMEM;
-
-	mkc = MLX5_ADDR_OF(create_mkey_in, in, memory_key_mkey_entry);
 	for (i = 0; i < num; i++) {
-		mr = alloc_cache_mr(ent, mkc);
-		if (!mr) {
-			err = -ENOMEM;
-			goto free_in;
-		}
+		async_create = kzalloc(sizeof(struct mlx5r_async_create_mkey),
+				       GFP_KERNEL);
+		if (!async_create)
+			return -ENOMEM;
+		mkc = MLX5_ADDR_OF(create_mkey_in, async_create->in,
+				   memory_key_mkey_entry);
+		set_cache_mkc(ent, mkc);
+		async_create->ent = ent;
 
 		err = push_mkey(ent, true, NULL);
 		if (err)
-			goto free_mr;
+			goto free_async_create;
 
-		err = mlx5_ib_create_mkey_cb(ent->dev, &mr->mmkey,
-					     &ent->dev->async_ctx, in, inlen,
-					     mr->out, sizeof(mr->out),
-					     &mr->cb_work);
+		err = mlx5_ib_create_mkey_cb(async_create);
 		if (err) {
 			mlx5_ib_warn(ent->dev, "create mkey failed %d\n", err);
 			goto err_undo_reserve;
 		}
 	}
 
-	kfree(in);
 	return 0;
 
 err_undo_reserve:
 	xa_lock_irq(&ent->mkeys);
 	undo_push_reserve_mkey(ent);
 	xa_unlock_irq(&ent->mkeys);
-free_mr:
-	kfree(mr);
-free_in:
-	kfree(in);
+free_async_create:
	kfree(async_create);
 	return err;
 }
 
 /* Synchronously create a MR in the cache */
-static struct mlx5_ib_mr *create_cache_mr(struct mlx5_cache_ent *ent)
+static int create_cache_mkey(struct mlx5_cache_ent *ent, u32 *mkey)
 {
 	size_t inlen = MLX5_ST_SZ_BYTES(create_mkey_in);
-	struct mlx5_ib_mr *mr;
 	void *mkc;
 	u32 *in;
 	int err;
 
 	in = kzalloc(inlen, GFP_KERNEL);
 	if (!in)
-		return ERR_PTR(-ENOMEM);
+		return -ENOMEM;
 	mkc = MLX5_ADDR_OF(create_mkey_in, in, memory_key_mkey_entry);
+	set_cache_mkc(ent, mkc);
 
-	mr = alloc_cache_mr(ent, mkc);
-	if (!mr) {
-		err = -ENOMEM;
-		goto free_in;
-	}
-
-	err = mlx5_core_create_mkey(ent->dev->mdev, &mr->mmkey.key, in, inlen);
+	err = mlx5_core_create_mkey(ent->dev->mdev, mkey, in, inlen);
 	if (err)
-		goto free_mr;
+		goto free_in;
 
-	init_waitqueue_head(&mr->mmkey.wait);
-	mr->mmkey.type = MLX5_MKEY_MR;
 	WRITE_ONCE(ent->dev->cache.last_add, jiffies);
-	kfree(in);
-	return mr;
-free_mr:
-	kfree(mr);
 free_in:
 	kfree(in);
-	return ERR_PTR(err);
+	return err;
 }
 
 static void remove_cache_mr_locked(struct mlx5_cache_ent *ent)
 {
-	struct mlx5_ib_mr *mr;
+	u32 mkey;
 
 	lockdep_assert_held(&ent->mkeys.xa_lock);
 	if (!ent->stored)
 		return;
-	mr = pop_stored_mkey(ent);
+	mkey = pop_stored_mkey(ent);
 	xa_unlock_irq(&ent->mkeys);
-	mlx5_core_destroy_mkey(ent->dev->mdev, mr->mmkey.key);
-	kfree(mr);
+	mlx5_core_destroy_mkey(ent->dev->mdev, mkey);
 	xa_lock_irq(&ent->mkeys);
 }
 
@@ -678,11 +644,15 @@ struct mlx5_ib_mr *mlx5_mr_cache_alloc(struct mlx5_ib_dev *dev,
 				       int access_flags)
 {
 	struct mlx5_ib_mr *mr;
+	int err;
 
-	/* Matches access in alloc_cache_mr() */
 	if (!mlx5r_umr_can_reconfig(dev, 0, access_flags))
 		return ERR_PTR(-EOPNOTSUPP);
 
+	mr = kzalloc(sizeof(*mr), GFP_KERNEL);
+	if (!mr)
+		return ERR_PTR(-ENOMEM);
+
 	xa_lock_irq(&ent->mkeys);
 	ent->in_use++;
 
@@ -690,20 +660,22 @@ struct mlx5_ib_mr *mlx5_mr_cache_alloc(struct mlx5_ib_dev *dev,
 		queue_adjust_cache_locked(ent);
 		ent->miss++;
 		xa_unlock_irq(&ent->mkeys);
-		mr = create_cache_mr(ent);
-		if (IS_ERR(mr)) {
+		err = create_cache_mkey(ent, &mr->mmkey.key);
+		if (err) {
 			xa_lock_irq(&ent->mkeys);
 			ent->in_use--;
 			xa_unlock_irq(&ent->mkeys);
-			return mr;
+			kfree(mr);
+			return ERR_PTR(err);
 		}
 	} else {
-		mr = pop_stored_mkey(ent);
+		mr->mmkey.key = pop_stored_mkey(ent);
 		queue_adjust_cache_locked(ent);
 		xa_unlock_irq(&ent->mkeys);
-
-		mlx5_clear_mr(mr);
 	}
+	mr->mmkey.cache_ent = ent;
+	mr->mmkey.type = MLX5_MKEY_MR;
+	init_waitqueue_head(&mr->mmkey.wait);
 	return mr;
 }
 
@@ -711,15 +683,14 @@ static void clean_keys(struct mlx5_ib_dev *dev, int c)
 {
 	struct mlx5_mr_cache *cache = &dev->cache;
 	struct mlx5_cache_ent *ent = &cache->ent[c];
-	struct mlx5_ib_mr *mr;
+	u32 mkey;
 
 	cancel_delayed_work(&ent->dwork);
 	xa_lock_irq(&ent->mkeys);
 	while (ent->stored) {
-		mr = pop_stored_mkey(ent);
+		mkey = pop_stored_mkey(ent);
 		xa_unlock_irq(&ent->mkeys);
-		mlx5_core_destroy_mkey(dev->mdev, mr->mmkey.key);
-		kfree(mr);
+		mlx5_core_destroy_mkey(dev->mdev, mkey);
 		xa_lock_irq(&ent->mkeys);
 	}
 	xa_unlock_irq(&ent->mkeys);
@@ -1391,7 +1362,7 @@ static bool can_use_umr_rereg_pas(struct mlx5_ib_mr *mr,
 	struct mlx5_ib_dev *dev = to_mdev(mr->ibmr.device);
 
 	/* We only track the allocated sizes of MRs from the cache */
-	if (!mr->cache_ent)
+	if (!mr->mmkey.cache_ent)
 		return false;
 	if (!mlx5r_umr_can_load_pas(dev, new_umem->length))
 		return false;
@@ -1400,7 +1371,7 @@ static bool can_use_umr_rereg_pas(struct mlx5_ib_mr *mr,
 		mlx5_umem_find_best_pgsz(new_umem, mkc, log_page_size, 0, iova);
 	if (WARN_ON(!*page_size))
 		return false;
-	return (1ULL << mr->cache_ent->order) >=
+	return (1ULL << mr->mmkey.cache_ent->order) >=
 	       ib_umem_num_dma_blocks(new_umem, *page_size);
 }
 
@@ -1641,16 +1612,17 @@ int mlx5_ib_dereg_mr(struct ib_mr *ibmr, struct ib_udata *udata)
 	}
 	/* Stop DMA */
-	if (mr->cache_ent) {
-		xa_lock_irq(&mr->cache_ent->mkeys);
-		mr->cache_ent->in_use--;
-		xa_unlock_irq(&mr->cache_ent->mkeys);
+	if (mr->mmkey.cache_ent) {
+		xa_lock_irq(&mr->mmkey.cache_ent->mkeys);
+		mr->mmkey.cache_ent->in_use--;
+		xa_unlock_irq(&mr->mmkey.cache_ent->mkeys);
 
 		if (mlx5r_umr_revoke_mr(mr) ||
-		    push_mkey(mr->cache_ent, false, mr))
-			mr->cache_ent = NULL;
+		    push_mkey(mr->mmkey.cache_ent, false,
+			      xa_mk_value(mr->mmkey.key)))
+			mr->mmkey.cache_ent = NULL;
 	}
 
-	if (!mr->cache_ent) {
+	if (!mr->mmkey.cache_ent) {
 		rc = destroy_mkey(to_mdev(mr->ibmr.device), mr);
 		if (rc)
 			return rc;
@@ -1667,10 +1639,10 @@ int mlx5_ib_dereg_mr(struct ib_mr *ibmr, struct ib_udata *udata)
 		mlx5_ib_free_odp_mr(mr);
 	}
 
-	if (!mr->cache_ent) {
+	if (!mr->mmkey.cache_ent)
 		mlx5_free_priv_descs(mr);
-		kfree(mr);
-	}
+
+	kfree(mr);
 
 	return 0;
 }
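A brief note on the asynchronous fill path above: each pending CREATE_MKEY
command now carries its own small context (struct mlx5r_async_create_mkey)
that embeds the mlx5_async_work, instead of borrowing a whole struct
mlx5_ib_mr. The completion handler recovers the context with container_of()
and frees it, so nothing MR-sized stays allocated while commands are in
flight. A simplified sketch of the pattern follows; the demo_ names are
illustrative, not from the driver:

struct demo_async_create {
	struct mlx5_async_work cb_work;	/* embedded work item passed to the callback */
	struct mlx5_cache_ent *ent;	/* cache entry the new mkey is destined for */
	u32 mkey;			/* filled in once the command completes */
};

static void demo_create_done(int status, struct mlx5_async_work *context)
{
	/* Recover the per-request context from the embedded work item. */
	struct demo_async_create *req =
		container_of(context, struct demo_async_create, cb_work);

	if (!status) {
		/* ... hand req->mkey over to the cache entry here ... */
	}
	kfree(req);	/* the request owns its context; free it on completion */
}

The actual struct mlx5_ib_mr is only allocated later, in mlx5_mr_cache_alloc(),
when a consumer asks for a key.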
From patchwork Tue Jul 26 07:19:11 2022
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Michael Guralnik
X-Patchwork-Id: 12928867
X-Patchwork-Delegate: jgg@ziepe.ca
From: Michael Guralnik
To:
CC: , , , , , , Aharon Landau
Subject: [PATCH rdma-next v1 5/5] RDMA/mlx5: Rename the mkey cache variables and functions
Date: Tue, 26 Jul 2022 10:19:11 +0300
Message-ID: <20220726071911.122765-6-michaelgur@nvidia.com>
X-Mailer: git-send-email 2.21.0
In-Reply-To: <20220726071911.122765-1-michaelgur@nvidia.com>
References: <20220726071911.122765-1-michaelgur@nvidia.com>
Precedence: bulk
List-ID:
X-Mailing-List: linux-rdma@vger.kernel.org

From: Aharon Landau

After replacing the MR cache with an Mkey cache, rename the variables and
functions to fit the new meaning.

Signed-off-by: Aharon Landau
Signed-off-by: Leon Romanovsky
---
 drivers/infiniband/hw/mlx5/main.c    |  4 +--
 drivers/infiniband/hw/mlx5/mlx5_ib.h | 14 ++++----
 drivers/infiniband/hw/mlx5/mr.c      | 54 ++++++++++++++--------------
 drivers/infiniband/hw/mlx5/odp.c     |  2 +-
 include/linux/mlx5/driver.h          |  6 ++--
 5 files changed, 40 insertions(+), 40 deletions(-)

diff --git a/drivers/infiniband/hw/mlx5/main.c b/drivers/infiniband/hw/mlx5/main.c
index b68fddeac0f1..a174a0eee8dc 100644
--- a/drivers/infiniband/hw/mlx5/main.c
+++ b/drivers/infiniband/hw/mlx5/main.c
@@ -4002,7 +4002,7 @@ static void mlx5_ib_stage_pre_ib_reg_umr_cleanup(struct mlx5_ib_dev *dev)
 {
 	int err;
 
-	err = mlx5_mr_cache_cleanup(dev);
+	err = mlx5_mkey_cache_cleanup(dev);
 	if (err)
 		mlx5_ib_warn(dev, "mr cache cleanup failed\n");
 
@@ -4022,7 +4022,7 @@ static int mlx5_ib_stage_post_ib_reg_umr_init(struct mlx5_ib_dev *dev)
 	if (ret)
 		return ret;
 
-	ret = mlx5_mr_cache_init(dev);
+	ret = mlx5_mkey_cache_init(dev);
 	if (ret) {
 		mlx5_ib_warn(dev, "mr cache init failed %d\n", ret);
 		mlx5r_umr_resource_cleanup(dev);
diff --git a/drivers/infiniband/hw/mlx5/mlx5_ib.h b/drivers/infiniband/hw/mlx5/mlx5_ib.h
index 91f985cd7d90..2e2ad3918385 100644
--- a/drivers/infiniband/hw/mlx5/mlx5_ib.h
+++ b/drivers/infiniband/hw/mlx5/mlx5_ib.h
@@ -764,9 +764,9 @@ struct mlx5r_async_create_mkey {
 	u32 mkey;
 };
 
-struct mlx5_mr_cache {
+struct mlx5_mkey_cache {
 	struct workqueue_struct *wq;
-	struct mlx5_cache_ent ent[MAX_MR_CACHE_ENTRIES];
+	struct mlx5_cache_ent ent[MAX_MKEY_CACHE_ENTRIES];
 	struct dentry *root;
 	unsigned long last_add;
 };
@@ -1065,7 +1065,7 @@ struct mlx5_ib_dev {
 	struct mlx5_ib_resources devr;
 
 	atomic_t mkey_var;
-	struct mlx5_mr_cache cache;
+	struct mlx5_mkey_cache cache;
 	struct timer_list delay_timer;
 	/* Prevents soft lock on massive reg MRs */
 	struct mutex slow_path_mutex;
@@ -1310,8 +1310,8 @@ void mlx5_ib_populate_pas(struct ib_umem *umem, size_t page_size, __be64 *pas,
 			  u64 access_flags);
 void mlx5_ib_copy_pas(u64 *old, u64 *new, int step, int num);
 int mlx5_ib_get_cqe_size(struct ib_cq *ibcq);
-int mlx5_mr_cache_init(struct mlx5_ib_dev *dev);
-int mlx5_mr_cache_cleanup(struct mlx5_ib_dev *dev);
+int mlx5_mkey_cache_init(struct mlx5_ib_dev *dev);
+int mlx5_mkey_cache_cleanup(struct mlx5_ib_dev *dev);
 
 struct mlx5_ib_mr *mlx5_mr_cache_alloc(struct mlx5_ib_dev *dev,
 				       struct mlx5_cache_ent *ent,
@@ -1339,7 +1339,7 @@ int mlx5r_odp_create_eq(struct mlx5_ib_dev *dev, struct mlx5_ib_pf_eq *eq);
 void mlx5_ib_odp_cleanup_one(struct mlx5_ib_dev *ibdev);
 int __init mlx5_ib_odp_init(void);
 void mlx5_ib_odp_cleanup(void);
-void mlx5_odp_init_mr_cache_entry(struct mlx5_cache_ent *ent);
+void mlx5_odp_init_mkey_cache_entry(struct mlx5_cache_ent *ent);
 void mlx5_odp_populate_xlt(void *xlt, size_t idx, size_t nentries,
 			   struct mlx5_ib_mr *mr, int flags);
 
@@ -1358,7 +1358,7 @@ static inline int mlx5r_odp_create_eq(struct mlx5_ib_dev *dev,
 static inline void mlx5_ib_odp_cleanup_one(struct mlx5_ib_dev *ibdev) {}
 static inline int mlx5_ib_odp_init(void) { return 0; }
 static inline void mlx5_ib_odp_cleanup(void) {}
-static inline void mlx5_odp_init_mr_cache_entry(struct mlx5_cache_ent *ent) {}
+static inline void mlx5_odp_init_mkey_cache_entry(struct mlx5_cache_ent *ent) {}
 static inline void mlx5_odp_populate_xlt(void *xlt, size_t idx, size_t nentries,
 					 struct mlx5_ib_mr *mr, int flags) {}
diff --git a/drivers/infiniband/hw/mlx5/mr.c b/drivers/infiniband/hw/mlx5/mr.c
index edbc2990d151..129d531bd01b 100644
--- a/drivers/infiniband/hw/mlx5/mr.c
+++ b/drivers/infiniband/hw/mlx5/mr.c
@@ -119,7 +119,7 @@ static int mlx5_ib_create_mkey_cb(struct mlx5r_async_create_mkey *async_create)
 				&async_create->cb_work);
 }
 
-static int mr_cache_max_order(struct mlx5_ib_dev *dev);
+static int mkey_cache_max_order(struct mlx5_ib_dev *dev);
 static void queue_adjust_cache_locked(struct mlx5_cache_ent *ent);
 
 static int destroy_mkey(struct mlx5_ib_dev *dev, struct mlx5_ib_mr *mr)
@@ -515,11 +515,11 @@ static const struct file_operations limit_fops = {
 	.read	= limit_read,
 };
 
-static bool someone_adding(struct mlx5_mr_cache *cache)
+static bool someone_adding(struct mlx5_mkey_cache *cache)
 {
 	unsigned int i;
 
-	for (i = 0; i < MAX_MR_CACHE_ENTRIES; i++) {
+	for (i = 0; i < MAX_MKEY_CACHE_ENTRIES; i++) {
 		struct mlx5_cache_ent *ent = &cache->ent[i];
 		bool ret;
 
@@ -569,7 +569,7 @@ static void queue_adjust_cache_locked(struct mlx5_cache_ent *ent)
 static void __cache_work_func(struct mlx5_cache_ent *ent)
 {
 	struct mlx5_ib_dev *dev = ent->dev;
-	struct mlx5_mr_cache *cache = &dev->cache;
+	struct mlx5_mkey_cache *cache = &dev->cache;
 	int err;
 
 	xa_lock_irq(&ent->mkeys);
@@ -681,7 +681,7 @@ struct mlx5_ib_mr *mlx5_mr_cache_alloc(struct mlx5_ib_dev *dev,
 
 static void clean_keys(struct mlx5_ib_dev *dev, int c)
 {
-	struct mlx5_mr_cache *cache = &dev->cache;
+	struct mlx5_mkey_cache *cache = &dev->cache;
 	struct mlx5_cache_ent *ent = &cache->ent[c];
 	u32 mkey;
 
@@ -696,7 +696,7 @@ static void clean_keys(struct mlx5_ib_dev *dev, int c)
 	xa_unlock_irq(&ent->mkeys);
 }
 
-static void mlx5_mr_cache_debugfs_cleanup(struct mlx5_ib_dev *dev)
+static void mlx5_mkey_cache_debugfs_cleanup(struct mlx5_ib_dev *dev)
 {
 	if (!mlx5_debugfs_root || dev->is_rep)
 		return;
@@ -705,9 +705,9 @@ static void mlx5_mr_cache_debugfs_cleanup(struct mlx5_ib_dev *dev)
 	dev->cache.root = NULL;
 }
 
-static void mlx5_mr_cache_debugfs_init(struct mlx5_ib_dev *dev)
+static void mlx5_mkey_cache_debugfs_init(struct mlx5_ib_dev *dev)
 {
-	struct mlx5_mr_cache *cache = &dev->cache;
+	struct mlx5_mkey_cache *cache = &dev->cache;
 	struct mlx5_cache_ent *ent;
 	struct dentry *dir;
 	int i;
@@ -717,7 +717,7 @@ static void mlx5_mr_cache_debugfs_init(struct mlx5_ib_dev *dev)
 	cache->root = debugfs_create_dir("mr_cache",
 					 mlx5_debugfs_get_dev_root(dev->mdev));
 
-	for (i = 0; i < MAX_MR_CACHE_ENTRIES; i++) {
+	for (i = 0; i < MAX_MKEY_CACHE_ENTRIES; i++) {
 		ent = &cache->ent[i];
 		sprintf(ent->name, "%d", ent->order);
 		dir = debugfs_create_dir(ent->name, cache->root);
@@ -735,9 +735,9 @@ static void delay_time_func(struct timer_list *t)
 	WRITE_ONCE(dev->fill_delay, 0);
 }
 
-int mlx5_mr_cache_init(struct mlx5_ib_dev *dev)
+int mlx5_mkey_cache_init(struct mlx5_ib_dev *dev)
 {
-	struct mlx5_mr_cache *cache = &dev->cache;
+	struct mlx5_mkey_cache *cache = &dev->cache;
 	struct mlx5_cache_ent *ent;
 	int i;
 
@@ -750,7 +750,7 @@ int mlx5_mr_cache_init(struct mlx5_ib_dev *dev)
 	mlx5_cmd_init_async_ctx(dev->mdev, &dev->async_ctx);
 	timer_setup(&dev->delay_timer, delay_time_func, 0);
 
-	for (i = 0; i < MAX_MR_CACHE_ENTRIES; i++) {
+	for (i = 0; i < MAX_MKEY_CACHE_ENTRIES; i++) {
 		ent = &cache->ent[i];
 		xa_init_flags(&ent->mkeys, XA_FLAGS_LOCK_IRQ);
 		ent->order = i + 2;
@@ -759,12 +759,12 @@ int mlx5_mr_cache_init(struct mlx5_ib_dev *dev)
 
 		INIT_DELAYED_WORK(&ent->dwork, delayed_cache_work_func);
 
-		if (i > MR_CACHE_LAST_STD_ENTRY) {
-			mlx5_odp_init_mr_cache_entry(ent);
+		if (i > MKEY_CACHE_LAST_STD_ENTRY) {
+			mlx5_odp_init_mkey_cache_entry(ent);
 			continue;
 		}
 
-		if (ent->order > mr_cache_max_order(dev))
+		if (ent->order > mkey_cache_max_order(dev))
 			continue;
 
 		ent->page = PAGE_SHIFT;
@@ -781,19 +781,19 @@ int mlx5_mr_cache_init(struct mlx5_ib_dev *dev)
 		xa_unlock_irq(&ent->mkeys);
 	}
 
-	mlx5_mr_cache_debugfs_init(dev);
+	mlx5_mkey_cache_debugfs_init(dev);
 
 	return 0;
 }
 
-int mlx5_mr_cache_cleanup(struct mlx5_ib_dev *dev)
+int mlx5_mkey_cache_cleanup(struct mlx5_ib_dev *dev)
 {
 	unsigned int i;
 
 	if (!dev->cache.wq)
 		return 0;
 
-	for (i = 0; i < MAX_MR_CACHE_ENTRIES; i++) {
+	for (i = 0; i < MAX_MKEY_CACHE_ENTRIES; i++) {
 		struct mlx5_cache_ent *ent = &dev->cache.ent[i];
 
 		xa_lock_irq(&ent->mkeys);
@@ -802,10 +802,10 @@ int mlx5_mr_cache_cleanup(struct mlx5_ib_dev *dev)
 		cancel_delayed_work_sync(&ent->dwork);
 	}
 
-	mlx5_mr_cache_debugfs_cleanup(dev);
+	mlx5_mkey_cache_debugfs_cleanup(dev);
 	mlx5_cmd_cleanup_async_ctx(&dev->async_ctx);
 
-	for (i = 0; i < MAX_MR_CACHE_ENTRIES; i++)
+	for (i = 0; i < MAX_MKEY_CACHE_ENTRIES; i++)
 		clean_keys(dev, i);
 
 	destroy_workqueue(dev->cache.wq);
@@ -872,22 +872,22 @@ static int get_octo_len(u64 addr, u64 len, int page_shift)
 	return (npages + 1) / 2;
 }
 
-static int mr_cache_max_order(struct mlx5_ib_dev *dev)
+static int mkey_cache_max_order(struct mlx5_ib_dev *dev)
 {
 	if (MLX5_CAP_GEN(dev->mdev, umr_extended_translation_offset))
-		return MR_CACHE_LAST_STD_ENTRY + 2;
+		return MKEY_CACHE_LAST_STD_ENTRY + 2;
 	return MLX5_MAX_UMR_SHIFT;
 }
 
-static struct mlx5_cache_ent *mr_cache_ent_from_order(struct mlx5_ib_dev *dev,
-						       unsigned int order)
+static struct mlx5_cache_ent *mkey_cache_ent_from_order(struct mlx5_ib_dev *dev,
+							 unsigned int order)
 {
-	struct mlx5_mr_cache *cache = &dev->cache;
+	struct mlx5_mkey_cache *cache = &dev->cache;
 
 	if (order < cache->ent[0].order)
 		return &cache->ent[0];
 	order = order - cache->ent[0].order;
-	if (order > MR_CACHE_LAST_STD_ENTRY)
+	if (order > MKEY_CACHE_LAST_STD_ENTRY)
 		return NULL;
 	return &cache->ent[order];
 }
@@ -930,7 +930,7 @@ static struct mlx5_ib_mr *alloc_cacheable_mr(struct ib_pd *pd,
 							     0, iova);
 	if (WARN_ON(!page_size))
 		return ERR_PTR(-EINVAL);
-	ent = mr_cache_ent_from_order(
+	ent = mkey_cache_ent_from_order(
 		dev, order_base_2(ib_umem_num_dma_blocks(umem, page_size)));
 	/*
 	 * Matches access in alloc_cache_mr(). If the MR can't come from the
diff --git a/drivers/infiniband/hw/mlx5/odp.c b/drivers/infiniband/hw/mlx5/odp.c
index 84da5674e1ab..e305bf1dc6c2 100644
--- a/drivers/infiniband/hw/mlx5/odp.c
+++ b/drivers/infiniband/hw/mlx5/odp.c
@@ -1588,7 +1588,7 @@ mlx5_ib_odp_destroy_eq(struct mlx5_ib_dev *dev, struct mlx5_ib_pf_eq *eq)
 	return err;
 }
 
-void mlx5_odp_init_mr_cache_entry(struct mlx5_cache_ent *ent)
+void mlx5_odp_init_mkey_cache_entry(struct mlx5_cache_ent *ent)
 {
 	if (!(ent->dev->odp_caps.general_caps & IB_ODP_SUPPORT_IMPLICIT))
 		return;
diff --git a/include/linux/mlx5/driver.h b/include/linux/mlx5/driver.h
index 76d7661e3e63..220597c2f436 100644
--- a/include/linux/mlx5/driver.h
+++ b/include/linux/mlx5/driver.h
@@ -728,10 +728,10 @@ enum {
 };
 
 enum {
-	MR_CACHE_LAST_STD_ENTRY = 20,
+	MKEY_CACHE_LAST_STD_ENTRY = 20,
 	MLX5_IMR_MTT_CACHE_ENTRY,
 	MLX5_IMR_KSM_CACHE_ENTRY,
-	MAX_MR_CACHE_ENTRIES
+	MAX_MKEY_CACHE_ENTRIES
 };
 
 struct mlx5_profile {
@@ -740,7 +740,7 @@ struct mlx5_profile {
 	struct {
 		int size;
 		int limit;
-	} mr_cache[MAX_MR_CACHE_ENTRIES];
+	} mr_cache[MAX_MKEY_CACHE_ENTRIES];
 };
 
 struct mlx5_hca_cap {