From patchwork Sun Oct 27 14:21:17 2024
X-Patchwork-Submitter: Leon Romanovsky
X-Patchwork-Id: 13852607
From: Leon Romanovsky
To: Jens Axboe, Jason Gunthorpe, Robin Murphy, Joerg Roedel, Will Deacon,
	Christoph Hellwig, Sagi Grimberg
Cc: Leon Romanovsky, Keith Busch, Bjorn Helgaas, Logan Gunthorpe,
	Yishai Hadas, Shameer Kolothum, Kevin Tian, Alex Williamson,
	Marek Szyprowski, Jérôme Glisse, Andrew Morton, Jonathan Corbet,
	linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
	linux-block@vger.kernel.org, linux-rdma@vger.kernel.org,
	iommu@lists.linux.dev, linux-nvme@lists.infradead.org,
	linux-pci@vger.kernel.org, kvm@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH 17/18] vfio/mlx5: Explicitly store page list
Date: Sun, 27 Oct 2024 16:21:17 +0200
Message-ID: <4c0a2f96d672d395caec3fe7dd6049a48b2582c6.1730037276.git.leon@kernel.org>
X-Mailer: git-send-email 2.46.2
X-Mailing-List: linux-rdma@vger.kernel.org

From: Leon Romanovsky

As a preparation for removing the scatter-gather table and unifying the
receive and send lists, explicitly store the page list.
Signed-off-by: Leon Romanovsky
---
 drivers/vfio/pci/mlx5/cmd.c | 29 ++++++++++++-----------------
 drivers/vfio/pci/mlx5/cmd.h |  1 +
 2 files changed, 13 insertions(+), 17 deletions(-)

diff --git a/drivers/vfio/pci/mlx5/cmd.c b/drivers/vfio/pci/mlx5/cmd.c
index 1832a6c1f35d..34ae3e299a9e 100644
--- a/drivers/vfio/pci/mlx5/cmd.c
+++ b/drivers/vfio/pci/mlx5/cmd.c
@@ -422,6 +422,7 @@ void mlx5vf_free_data_buffer(struct mlx5_vhca_data_buffer *buf)
 	for_each_sgtable_page(&buf->table.sgt, &sg_iter, 0)
 		__free_page(sg_page_iter_page(&sg_iter));
 	sg_free_append_table(&buf->table);
+	kvfree(buf->page_list);
 	kfree(buf);
 }
 
@@ -434,39 +435,33 @@ static int mlx5vf_add_migration_pages(struct mlx5_vhca_data_buffer *buf,
 	unsigned int to_fill;
 	int ret;
 
-	to_fill = min_t(unsigned int, npages, PAGE_SIZE / sizeof(*page_list));
-	page_list = kvzalloc(to_fill * sizeof(*page_list), GFP_KERNEL_ACCOUNT);
+	to_fill = min_t(unsigned int, npages, PAGE_SIZE / sizeof(*buf->page_list));
+	page_list = kvzalloc(to_fill * sizeof(*buf->page_list), GFP_KERNEL_ACCOUNT);
 	if (!page_list)
 		return -ENOMEM;
 
+	buf->page_list = page_list;
+
 	do {
 		filled = alloc_pages_bulk_array(GFP_KERNEL_ACCOUNT, to_fill,
-						page_list);
-		if (!filled) {
-			ret = -ENOMEM;
-			goto err;
-		}
+						buf->page_list + buf->npages);
+		if (!filled)
+			return -ENOMEM;
+
 		to_alloc -= filled;
 		ret = sg_alloc_append_table_from_pages(
-			&buf->table, page_list, filled, 0,
+			&buf->table, buf->page_list + buf->npages, filled, 0,
 			filled << PAGE_SHIFT, UINT_MAX, SG_MAX_SINGLE_ALLOC,
 			GFP_KERNEL_ACCOUNT);
 		if (ret)
-			goto err;
+			return ret;
 
 		buf->npages += filled;
-		/* clean input for another bulk allocation */
-		memset(page_list, 0, filled * sizeof(*page_list));
 		to_fill = min_t(unsigned int, to_alloc,
-				PAGE_SIZE / sizeof(*page_list));
+				PAGE_SIZE / sizeof(*buf->page_list));
 	} while (to_alloc > 0);
 
-	kvfree(page_list);
 	return 0;
-
-err:
-	kvfree(page_list);
-	return ret;
 }
 
 struct mlx5_vhca_data_buffer *
diff --git a/drivers/vfio/pci/mlx5/cmd.h b/drivers/vfio/pci/mlx5/cmd.h
index 25dd6ff54591..5b764199db53 100644
--- a/drivers/vfio/pci/mlx5/cmd.h
+++ b/drivers/vfio/pci/mlx5/cmd.h
@@ -53,6 +53,7 @@ struct mlx5_vf_migration_header {
 };
 
 struct mlx5_vhca_data_buffer {
+	struct page **page_list;
 	struct sg_append_table table;
 	loff_t start_pos;
 	u64 length;
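
For reference, mlx5vf_add_migration_pages() reads roughly as follows once
this patch is applied. This is a sketch assembled from the hunks above; the
declarations at the top of the function are not visible in the diff context
and are approximated here.

static int mlx5vf_add_migration_pages(struct mlx5_vhca_data_buffer *buf,
				      unsigned int npages)
{
	/* Approximated: these declarations are outside the diff context. */
	unsigned int to_alloc = npages;
	struct page **page_list;
	unsigned long filled;
	unsigned int to_fill;
	int ret;

	/* Size the pointer array for the first bulk round's worth of entries. */
	to_fill = min_t(unsigned int, npages, PAGE_SIZE / sizeof(*buf->page_list));
	page_list = kvzalloc(to_fill * sizeof(*buf->page_list), GFP_KERNEL_ACCOUNT);
	if (!page_list)
		return -ENOMEM;

	/* The buffer owns the list now; mlx5vf_free_data_buffer() kvfree()s it. */
	buf->page_list = page_list;

	do {
		/* Append each round after the buf->npages pages already stored. */
		filled = alloc_pages_bulk_array(GFP_KERNEL_ACCOUNT, to_fill,
						buf->page_list + buf->npages);
		if (!filled)
			return -ENOMEM;

		to_alloc -= filled;
		ret = sg_alloc_append_table_from_pages(
			&buf->table, buf->page_list + buf->npages, filled, 0,
			filled << PAGE_SHIFT, UINT_MAX, SG_MAX_SINGLE_ALLOC,
			GFP_KERNEL_ACCOUNT);
		if (ret)
			return ret;

		buf->npages += filled;
		to_fill = min_t(unsigned int, to_alloc,
				PAGE_SIZE / sizeof(*buf->page_list));
	} while (to_alloc > 0);

	return 0;
}

Two consequences of storing the list on the buffer fall out of the sketch:
the per-round memset() of the scratch array is gone, because each round
writes to a fresh region at buf->page_list + buf->npages instead of reusing
the front of the array, and the error paths simply return, since
mlx5vf_free_data_buffer() now releases both the scatter-gather table and the
page list. Note that, as posted, the array is sized for at most
PAGE_SIZE / sizeof(struct page *) entries, so the appends stay in bounds
only while npages fits within that first allocation.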