From patchwork Tue Nov 3 19:47:32 2020
X-Patchwork-Submitter: Saeed Mahameed
X-Patchwork-Id: 11878963
X-Patchwork-Delegate: kuba@kernel.org
X-Mailing-List: netdev@vger.kernel.org
From: Saeed Mahameed
To: Jakub Kicinski
CC: "David S. Miller", Yevgeny Kliteynik, Erez Shitrit, Alex Vesker,
 Mark Bloch, Saeed Mahameed
Subject: [net-next 06/12] net/mlx5: DR, Sync chunks only during free
Date: Tue, 3 Nov 2020 11:47:32 -0800
Message-ID: <20201103194738.64061-7-saeedm@nvidia.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201103194738.64061-1-saeedm@nvidia.com>
References: <20201103194738.64061-1-saeedm@nvidia.com>
MIME-Version: 1.0

From: Yevgeny Kliteynik

When freeing chunks, we want to sync the steering so that all the "hot"
memory is written to ICM and all the chunks on the hot_list are actually
destroyed. When allocating from the pool, there is no need to sync the
steering: we aren't freeing anything, and a sync would only hurt
performance in terms of flows-per-second offloaded.
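
For illustration, here is a minimal, self-contained user-space sketch of
the pattern this patch moves to: batch the sync on the free path and keep
the alloc path sync-free. All names in it (struct pool, chunk_free(),
pool_sync(), HOT_LIMIT) are hypothetical stand-ins, not the mlx5dr API:

/* sync_on_free.c - illustrative only, not the mlx5dr implementation */
#include <stdbool.h>
#include <stdio.h>

#define HOT_LIMIT 64	/* hypothetical threshold that triggers a sync */

struct pool {
	int hot_chunks;	/* freed chunks parked on the "hot" list */
};

static bool pool_sync_required(const struct pool *p)
{
	return p->hot_chunks >= HOT_LIMIT;
}

static void pool_sync(struct pool *p)
{
	/* Stands in for flushing "hot" memory to ICM and really
	 * destroying the chunks on the hot list.
	 */
	printf("sync: flushing %d hot chunks\n", p->hot_chunks);
	p->hot_chunks = 0;
}

static void chunk_alloc(struct pool *p)
{
	(void)p;	/* alloc path: nothing was freed, so no sync */
}

static void chunk_free(struct pool *p)
{
	p->hot_chunks++;			/* park chunk on the hot list */
	if (pool_sync_required(p))	/* sync only once enough piled up */
		pool_sync(p);
}

int main(void)
{
	struct pool p = { 0 };
	int i;

	for (i = 0; i < 200; i++)
		chunk_alloc(&p);	/* fast path: zero syncs */
	for (i = 0; i < 200; i++)
		chunk_free(&p);		/* syncs happen here, in batches */
	return 0;
}

Allocation sits on the flow-insertion hot path, so moving the expensive
sync entirely to the free path means steady-state insertion never pays
for it; freed memory is still reclaimed, just in batches.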
Signed-off-by: Erez Shitrit
Signed-off-by: Yevgeny Kliteynik
Reviewed-by: Alex Vesker
Reviewed-by: Mark Bloch
Signed-off-by: Saeed Mahameed
---
 .../mellanox/mlx5/core/steering/dr_icm_pool.c | 14 ++++++++------
 1 file changed, 8 insertions(+), 6 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_icm_pool.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_icm_pool.c
index 2c5886b469f7..4d8330aab169 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_icm_pool.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_icm_pool.c
@@ -332,10 +332,6 @@ static int dr_icm_handle_buddies_get_mem(struct mlx5dr_icm_pool *pool,
 	bool new_mem = false;
 	int err;
 
-	/* Check if we have chunks that are waiting for sync-ste */
-	if (dr_icm_pool_is_sync_required(pool))
-		dr_icm_pool_sync_all_buddy_pools(pool);
-
 alloc_buddy_mem:
 	/* find the next free place from the buddy list */
 	list_for_each_entry(buddy_mem_pool, &pool->buddy_mem_list, list_node) {
@@ -409,12 +405,18 @@ mlx5dr_icm_alloc_chunk(struct mlx5dr_icm_pool *pool,
 void mlx5dr_icm_free_chunk(struct mlx5dr_icm_chunk *chunk)
 {
 	struct mlx5dr_icm_buddy_mem *buddy = chunk->buddy_mem;
+	struct mlx5dr_icm_pool *pool = buddy->pool;
 
 	/* move the memory to the waiting list AKA "hot" */
-	mutex_lock(&buddy->pool->mutex);
+	mutex_lock(&pool->mutex);
 	list_move_tail(&chunk->chunk_list, &buddy->hot_list);
 	buddy->hot_memory_size += chunk->byte_size;
-	mutex_unlock(&buddy->pool->mutex);
+
+	/* Check if we have chunks that are waiting for sync-ste */
+	if (dr_icm_pool_is_sync_required(pool))
+		dr_icm_pool_sync_all_buddy_pools(pool);
+
+	mutex_unlock(&pool->mutex);
 }
 
 struct mlx5dr_icm_pool *mlx5dr_icm_pool_create(struct mlx5dr_domain *dmn,